2025-05-17 00:00:07.351377 | Job console starting
2025-05-17 00:00:07.363987 | Updating git repos
2025-05-17 00:00:07.443524 | Cloning repos into workspace
2025-05-17 00:00:07.606414 | Restoring repo states
2025-05-17 00:00:07.639708 | Merging changes
2025-05-17 00:00:07.639754 | Checking out repos
2025-05-17 00:00:07.968351 | Preparing playbooks
2025-05-17 00:00:08.747100 | Running Ansible setup
2025-05-17 00:00:14.851582 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-17 00:00:15.894196 |
2025-05-17 00:00:15.894429 | PLAY [Base pre]
2025-05-17 00:00:15.959382 |
2025-05-17 00:00:15.960444 | TASK [Setup log path fact]
2025-05-17 00:00:16.016864 | orchestrator | ok
2025-05-17 00:00:16.065639 |
2025-05-17 00:00:16.065878 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-17 00:00:16.109729 | orchestrator | ok
2025-05-17 00:00:16.128555 |
2025-05-17 00:00:16.128718 | TASK [emit-job-header : Print job information]
2025-05-17 00:00:16.197344 | # Job Information
2025-05-17 00:00:16.197544 | Ansible Version: 2.16.14
2025-05-17 00:00:16.197580 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-17 00:00:16.197615 | Pipeline: periodic-midnight
2025-05-17 00:00:16.197639 | Executor: 521e9411259a
2025-05-17 00:00:16.197659 | Triggered by: https://github.com/osism/testbed
2025-05-17 00:00:16.197681 | Event ID: def5b8c5e8714326afd3bb9bb76d5cf8
2025-05-17 00:00:16.206650 |
2025-05-17 00:00:16.206792 | LOOP [emit-job-header : Print node information]
2025-05-17 00:00:16.373354 | orchestrator | ok:
2025-05-17 00:00:16.373570 | orchestrator | # Node Information
2025-05-17 00:00:16.373605 | orchestrator | Inventory Hostname: orchestrator
2025-05-17 00:00:16.373631 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-17 00:00:16.373654 | orchestrator | Username: zuul-testbed01
2025-05-17 00:00:16.373675 | orchestrator | Distro: Debian 12.10
2025-05-17 00:00:16.373700 | orchestrator | Provider: static-testbed
2025-05-17 00:00:16.373721 | orchestrator | Region:
2025-05-17 00:00:16.373742 | orchestrator | Label: testbed-orchestrator
2025-05-17 00:00:16.373762 | orchestrator | Product Name: OpenStack Nova
2025-05-17 00:00:16.373782 | orchestrator | Interface IP: 81.163.193.140
2025-05-17 00:00:16.391884 |
2025-05-17 00:00:16.392026 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-17 00:00:16.984345 | orchestrator -> localhost | changed
2025-05-17 00:00:17.005542 |
2025-05-17 00:00:17.005706 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-17 00:00:18.087682 | orchestrator -> localhost | changed
2025-05-17 00:00:18.107456 |
2025-05-17 00:00:18.107609 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-17 00:00:18.373982 | orchestrator -> localhost | ok
2025-05-17 00:00:18.381148 |
2025-05-17 00:00:18.381285 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-17 00:00:18.403858 | orchestrator | ok
2025-05-17 00:00:18.419419 | orchestrator | included: /var/lib/zuul/builds/638702988b704f0fb3d99ff5b9aee4e6/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-17 00:00:18.427604 |
2025-05-17 00:00:18.427705 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-17 00:00:19.788095 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-17 00:00:19.788301 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/638702988b704f0fb3d99ff5b9aee4e6/work/638702988b704f0fb3d99ff5b9aee4e6_id_rsa
2025-05-17 00:00:19.788343 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/638702988b704f0fb3d99ff5b9aee4e6/work/638702988b704f0fb3d99ff5b9aee4e6_id_rsa.pub
2025-05-17 00:00:19.788377 | orchestrator -> localhost | The key fingerprint is:
2025-05-17 00:00:19.788418 | orchestrator -> localhost | SHA256:RvzncMq/8Ye6M4ssEYnAoo9Wnf8Zd2MmSTDqlDeYhV0 zuul-build-sshkey
2025-05-17 00:00:19.788445 | orchestrator -> localhost | The key's randomart image is:
2025-05-17 00:00:19.788478 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-17 00:00:19.788501 | orchestrator -> localhost | | . o .E |
2025-05-17 00:00:19.788523 | orchestrator -> localhost | | . o o = |
2025-05-17 00:00:19.788543 | orchestrator -> localhost | | . .....O.o |
2025-05-17 00:00:19.788563 | orchestrator -> localhost | |. . o.Bo+ . |
2025-05-17 00:00:19.788583 | orchestrator -> localhost | | o. + S.= + |
2025-05-17 00:00:19.788613 | orchestrator -> localhost | |... +.o X = |
2025-05-17 00:00:19.788635 | orchestrator -> localhost | |. ..* B .. |
2025-05-17 00:00:19.788656 | orchestrator -> localhost | | .+ ooo. .|
2025-05-17 00:00:19.788677 | orchestrator -> localhost | | .o *B.. |
2025-05-17 00:00:19.788698 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-17 00:00:19.788748 | orchestrator -> localhost | ok: Runtime: 0:00:00.933434
2025-05-17 00:00:19.795978 |
2025-05-17 00:00:19.796086 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-17 00:00:19.824625 | orchestrator | ok
2025-05-17 00:00:19.834268 | orchestrator | included: /var/lib/zuul/builds/638702988b704f0fb3d99ff5b9aee4e6/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-17 00:00:19.843339 |
2025-05-17 00:00:19.843425 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-17 00:00:19.866715 | orchestrator | skipping: Conditional result was False
2025-05-17 00:00:19.874352 |
2025-05-17 00:00:19.874449 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-17 00:00:20.431860 | orchestrator | changed
2025-05-17 00:00:20.442286 |
2025-05-17 00:00:20.442399 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-17 00:00:20.722693 | orchestrator | ok
2025-05-17 00:00:20.731441 |
2025-05-17 00:00:20.731561 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-17 00:00:21.124550 | orchestrator | ok
2025-05-17 00:00:21.130338 |
2025-05-17 00:00:21.130426 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-17 00:00:21.509349 | orchestrator | ok
2025-05-17 00:00:21.517723 |
2025-05-17 00:00:21.517839 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-17 00:00:21.532115 | orchestrator | skipping: Conditional result was False
2025-05-17 00:00:21.539097 |
2025-05-17 00:00:21.539190 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-17 00:00:21.958730 | orchestrator -> localhost | changed
2025-05-17 00:00:21.989456 |
2025-05-17 00:00:21.989641 | TASK [add-build-sshkey : Add back temp key]
2025-05-17 00:00:22.318386 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/638702988b704f0fb3d99ff5b9aee4e6/work/638702988b704f0fb3d99ff5b9aee4e6_id_rsa (zuul-build-sshkey)
2025-05-17 00:00:22.318710 | orchestrator -> localhost | ok: Runtime: 0:00:00.008743
2025-05-17 00:00:22.333749 |
2025-05-17 00:00:22.333894 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-17 00:00:22.739746 | orchestrator | ok
2025-05-17 00:00:22.747823 |
2025-05-17 00:00:22.747937 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-17 00:00:22.771749 | orchestrator | skipping: Conditional result was False
2025-05-17 00:00:22.814754 |
2025-05-17 00:00:22.814898 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-17 00:00:23.193561 | orchestrator | ok
2025-05-17 00:00:23.209153 |
2025-05-17 00:00:23.209283 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-17 00:00:23.238311 | orchestrator | ok
2025-05-17 00:00:23.245443 |
2025-05-17 00:00:23.245553 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-17 00:00:23.560669 | orchestrator -> localhost | ok
2025-05-17 00:00:23.578989 |
2025-05-17 00:00:23.579149 | TASK [validate-host : Collect information about the host]
2025-05-17 00:00:24.837658 | orchestrator | ok
2025-05-17 00:00:24.865521 |
2025-05-17 00:00:24.868657 | TASK [validate-host : Sanitize hostname]
2025-05-17 00:00:24.951034 | orchestrator | ok
2025-05-17 00:00:24.957597 |
2025-05-17 00:00:24.958282 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-17 00:00:26.610951 | orchestrator -> localhost | changed
2025-05-17 00:00:26.619819 |
2025-05-17 00:00:26.620121 | TASK [validate-host : Collect information about zuul worker]
2025-05-17 00:00:27.548495 | orchestrator | ok
2025-05-17 00:00:27.554795 |
2025-05-17 00:00:27.556970 | TASK [validate-host : Write out all zuul information for each host]
2025-05-17 00:00:28.582573 | orchestrator -> localhost | changed
2025-05-17 00:00:28.612094 |
2025-05-17 00:00:28.612217 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-17 00:00:28.894952 | orchestrator | ok
2025-05-17 00:00:28.910338 |
2025-05-17 00:00:28.910460 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-17 00:00:46.366525 | orchestrator | changed:
2025-05-17 00:00:46.366891 | orchestrator | .d..t...... src/
2025-05-17 00:00:46.366936 | orchestrator | .d..t...... src/github.com/
2025-05-17 00:00:46.366962 | orchestrator | .d..t...... src/github.com/osism/
2025-05-17 00:00:46.366984 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-17 00:00:46.367005 | orchestrator | RedHat.yml
2025-05-17 00:00:46.376912 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-17 00:00:46.376930 | orchestrator | RedHat.yml
2025-05-17 00:00:46.376983 | orchestrator | = 2.2.0"...
2025-05-17 00:01:01.385650 | orchestrator | 00:01:01.385 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-17 00:01:01.456730 | orchestrator | 00:01:01.456 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-05-17 00:01:03.080662 | orchestrator | 00:01:03.080 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-17 00:01:03.858711 | orchestrator | 00:01:03.858 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-17 00:01:05.109227 | orchestrator | 00:01:05.108 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-17 00:01:06.193533 | orchestrator | 00:01:06.192 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-17 00:01:07.835500 | orchestrator | 00:01:07.835 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
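The add-build-sshkey tasks recorded above generate a per-build RSA keypair in the workspace and load it into the SSH agent. A minimal shell sketch of the key-generation step, assuming only that `ssh-keygen` is available; `BUILD_UUID` and `WORKDIR` are illustrative stand-ins for Zuul's real build UUID and workspace path:

```shell
#!/bin/sh
# Sketch of the "Create Temp SSH key" step seen in the log above.
# BUILD_UUID and WORKDIR are hypothetical stand-ins, not Zuul's real values.
set -e
BUILD_UUID=example0000000000000000000000000
WORKDIR=$(mktemp -d)

# -N '' gives an empty passphrase; -C sets the "zuul-build-sshkey" comment
# that appears next to the fingerprint in the console output.
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey \
  -f "$WORKDIR/${BUILD_UUID}_id_rsa"

ls "$WORKDIR"
```

The role then loads the private key with `ssh-add` ("Add back temp key" in the log) and installs the public half on all nodes.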
2025-05-17 00:01:08.997914 | orchestrator | 00:01:08.997 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-17 00:01:08.998086 | orchestrator | 00:01:08.997 STDOUT terraform: Providers are signed by their developers.
2025-05-17 00:01:08.998132 | orchestrator | 00:01:08.997 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-17 00:01:08.998146 | orchestrator | 00:01:08.997 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-17 00:01:08.998158 | orchestrator | 00:01:08.997 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-17 00:01:08.998179 | orchestrator | 00:01:08.997 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-17 00:01:08.998195 | orchestrator | 00:01:08.998 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-17 00:01:08.998207 | orchestrator | 00:01:08.998 STDOUT terraform: you run "tofu init" in the future.
2025-05-17 00:01:08.998227 | orchestrator | 00:01:08.998 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-17 00:01:08.998239 | orchestrator | 00:01:08.998 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-17 00:01:08.998254 | orchestrator | 00:01:08.998 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-17 00:01:08.998268 | orchestrator | 00:01:08.998 STDOUT terraform: should now work.
2025-05-17 00:01:08.998366 | orchestrator | 00:01:08.998 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-17 00:01:08.998381 | orchestrator | 00:01:08.998 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-17 00:01:08.998442 | orchestrator | 00:01:08.998 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-17 00:01:09.157779 | orchestrator | 00:01:09.157 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-05-17 00:01:09.357186 | orchestrator | 00:01:09.356 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-17 00:01:09.357281 | orchestrator | 00:01:09.356 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-17 00:01:09.357297 | orchestrator | 00:01:09.356 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-17 00:01:09.357309 | orchestrator | 00:01:09.356 STDOUT terraform: for this configuration.
2025-05-17 00:01:09.579175 | orchestrator | 00:01:09.578 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-05-17 00:01:09.671449 | orchestrator | 00:01:09.671 STDOUT terraform: ci.auto.tfvars
2025-05-17 00:01:09.673943 | orchestrator | 00:01:09.673 STDOUT terraform: default_custom.tf
2025-05-17 00:01:09.848281 | orchestrator | 00:01:09.848 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-05-17 00:01:10.866240 | orchestrator | 00:01:10.866 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
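The init and workspace messages above correspond to a short OpenTofu command sequence, driven in this job by Terragrunt. A hedged sketch of that sequence, assuming a `tofu` binary and an initialized working directory; the wrapper only echoes a SKIP marker where OpenTofu is not installed, so the sketch stays runnable anywhere:

```shell
#!/bin/sh
# Sketch (not the job's actual wrapper script) of the OpenTofu commands
# matching the console output above; run from a Terraform working directory.
run_tofu() {
    if command -v tofu >/dev/null 2>&1; then
        tofu "$@"
    else
        echo "SKIP tofu $*"   # keep the sketch runnable without OpenTofu
    fi
}

run_tofu init              # installs providers, writes .terraform.lock.hcl
run_tofu workspace new ci  # isolates state in a fresh "ci" workspace
run_tofu plan              # computes an execution plan for the workspace
```

Committing the generated `.terraform.lock.hcl`, as the init output recommends, pins the provider versions (here hashicorp/local v2.5.3, hashicorp/null v3.2.4, openstack v3.0.0) for future runs.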
2025-05-17 00:01:11.360387 | orchestrator | 00:01:11.360 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-17 00:01:11.536985 | orchestrator | 00:01:11.536 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-17 00:01:11.537059 | orchestrator | 00:01:11.536 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-17 00:01:11.537066 | orchestrator | 00:01:11.536 STDOUT terraform:   + create
2025-05-17 00:01:11.537072 | orchestrator | 00:01:11.536 STDOUT terraform:  <= read (data resources)
2025-05-17 00:01:11.537076 | orchestrator | 00:01:11.537 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-17 00:01:11.537128 | orchestrator | 00:01:11.537 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-17 00:01:11.537164 | orchestrator | 00:01:11.537 STDOUT terraform:   # (config refers to values not yet known)
2025-05-17 00:01:11.537184 | orchestrator | 00:01:11.537 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-17 00:01:11.537213 | orchestrator | 00:01:11.537 STDOUT terraform:   + checksum = (known after apply)
2025-05-17 00:01:11.537250 | orchestrator | 00:01:11.537 STDOUT terraform:   + created_at = (known after apply)
2025-05-17 00:01:11.537278 | orchestrator | 00:01:11.537 STDOUT terraform:   + file = (known after apply)
2025-05-17 00:01:11.537325 | orchestrator | 00:01:11.537 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.537342 | orchestrator | 00:01:11.537 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.537376 | orchestrator | 00:01:11.537 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-17 00:01:11.537403 | orchestrator | 00:01:11.537 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-17 00:01:11.537446 | orchestrator | 00:01:11.537 STDOUT terraform:   + most_recent = true
2025-05-17 00:01:11.537525 | orchestrator | 00:01:11.537 STDOUT terraform:   + name = (known after apply)
2025-05-17 00:01:11.537532 | orchestrator | 00:01:11.537 STDOUT terraform:   + protected = (known after apply)
2025-05-17 00:01:11.537566 | orchestrator | 00:01:11.537 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.537609 | orchestrator | 00:01:11.537 STDOUT terraform:   + schema = (known after apply)
2025-05-17 00:01:11.537631 | orchestrator | 00:01:11.537 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-17 00:01:11.537691 | orchestrator | 00:01:11.537 STDOUT terraform:   + tags = (known after apply)
2025-05-17 00:01:11.537698 | orchestrator | 00:01:11.537 STDOUT terraform:   + updated_at = (known after apply)
2025-05-17 00:01:11.537720 | orchestrator | 00:01:11.537 STDOUT terraform:  }
2025-05-17 00:01:11.537775 | orchestrator | 00:01:11.537 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-17 00:01:11.537798 | orchestrator | 00:01:11.537 STDOUT terraform:   # (config refers to values not yet known)
2025-05-17 00:01:11.537834 | orchestrator | 00:01:11.537 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-17 00:01:11.537885 | orchestrator | 00:01:11.537 STDOUT terraform:   + checksum = (known after apply)
2025-05-17 00:01:11.537941 | orchestrator | 00:01:11.537 STDOUT terraform:   + created_at = (known after apply)
2025-05-17 00:01:11.537961 | orchestrator | 00:01:11.537 STDOUT terraform:   + file = (known after apply)
2025-05-17 00:01:11.537990 | orchestrator | 00:01:11.537 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.538041 | orchestrator | 00:01:11.537 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.538079 | orchestrator | 00:01:11.538 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-17 00:01:11.538121 | orchestrator | 00:01:11.538 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-17 00:01:11.538126 | orchestrator | 00:01:11.538 STDOUT terraform:   + most_recent = true
2025-05-17 00:01:11.538153 | orchestrator | 00:01:11.538 STDOUT terraform:   + name = (known after apply)
2025-05-17 00:01:11.538181 | orchestrator | 00:01:11.538 STDOUT terraform:   + protected = (known after apply)
2025-05-17 00:01:11.538205 | orchestrator | 00:01:11.538 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.538239 | orchestrator | 00:01:11.538 STDOUT terraform:   + schema = (known after apply)
2025-05-17 00:01:11.538298 | orchestrator | 00:01:11.538 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-17 00:01:11.538318 | orchestrator | 00:01:11.538 STDOUT terraform:   + tags = (known after apply)
2025-05-17 00:01:11.538354 | orchestrator | 00:01:11.538 STDOUT terraform:   + updated_at = (known after apply)
2025-05-17 00:01:11.538374 | orchestrator | 00:01:11.538 STDOUT terraform:  }
2025-05-17 00:01:11.538407 | orchestrator | 00:01:11.538 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-17 00:01:11.538455 | orchestrator | 00:01:11.538 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-17 00:01:11.538485 | orchestrator | 00:01:11.538 STDOUT terraform:   + content = (known after apply)
2025-05-17 00:01:11.538539 | orchestrator | 00:01:11.538 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-17 00:01:11.538564 | orchestrator | 00:01:11.538 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-17 00:01:11.538621 | orchestrator | 00:01:11.538 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-17 00:01:11.538640 | orchestrator | 00:01:11.538 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-17 00:01:11.538681 | orchestrator | 00:01:11.538 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-17 00:01:11.538718 | orchestrator | 00:01:11.538 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-17 00:01:11.538750 | orchestrator | 00:01:11.538 STDOUT terraform:   + directory_permission = "0777"
2025-05-17 00:01:11.538787 | orchestrator | 00:01:11.538 STDOUT terraform:   + file_permission = "0644"
2025-05-17 00:01:11.538816 | orchestrator | 00:01:11.538 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-05-17 00:01:11.538872 | orchestrator | 00:01:11.538 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.538878 | orchestrator | 00:01:11.538 STDOUT terraform:  }
2025-05-17 00:01:11.538913 | orchestrator | 00:01:11.538 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-17 00:01:11.538954 | orchestrator | 00:01:11.538 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-17 00:01:11.538976 | orchestrator | 00:01:11.538 STDOUT terraform:   + content = (known after apply)
2025-05-17 00:01:11.539012 | orchestrator | 00:01:11.538 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-17 00:01:11.539047 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-17 00:01:11.539087 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-17 00:01:11.539134 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-17 00:01:11.539168 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-17 00:01:11.539204 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-17 00:01:11.539233 | orchestrator | 00:01:11.539 STDOUT terraform:   + directory_permission = "0777"
2025-05-17 00:01:11.539279 | orchestrator | 00:01:11.539 STDOUT terraform:   + file_permission = "0644"
2025-05-17 00:01:11.539285 | orchestrator | 00:01:11.539 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-05-17 00:01:11.539330 | orchestrator | 00:01:11.539 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.539336 | orchestrator | 00:01:11.539 STDOUT terraform:  }
2025-05-17 00:01:11.539625 | orchestrator | 00:01:11.539 STDOUT terraform:   # local_file.inventory will be created
2025-05-17 00:01:11.539704 | orchestrator | 00:01:11.539 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-17 00:01:11.539716 | orchestrator | 00:01:11.539 STDOUT terraform:   + content = (known after apply)
2025-05-17 00:01:11.539726 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-17 00:01:11.539744 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-17 00:01:11.539771 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-17 00:01:11.539790 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-17 00:01:11.539799 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-17 00:01:11.539808 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-17 00:01:11.539820 | orchestrator | 00:01:11.539 STDOUT terraform:   + directory_permission = "0777"
2025-05-17 00:01:11.539829 | orchestrator | 00:01:11.539 STDOUT terraform:   + file_permission = "0644"
2025-05-17 00:01:11.539838 | orchestrator | 00:01:11.539 STDOUT terraform:   + filename = "inventory.ci"
2025-05-17 00:01:11.539850 | orchestrator | 00:01:11.539 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.539891 | orchestrator | 00:01:11.539 STDOUT terraform:  }
2025-05-17 00:01:11.539905 | orchestrator | 00:01:11.539 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-17 00:01:11.539979 | orchestrator | 00:01:11.539 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-17 00:01:11.539991 | orchestrator | 00:01:11.539 STDOUT terraform:   + content = (sensitive value)
2025-05-17 00:01:11.540003 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-17 00:01:11.540127 | orchestrator | 00:01:11.539 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-17 00:01:11.540145 | orchestrator | 00:01:11.540 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-17 00:01:11.540151 | orchestrator | 00:01:11.540 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-17 00:01:11.540159 | orchestrator | 00:01:11.540 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-17 00:01:11.540197 | orchestrator | 00:01:11.540 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-17 00:01:11.540205 | orchestrator | 00:01:11.540 STDOUT terraform:   + directory_permission = "0700"
2025-05-17 00:01:11.540252 | orchestrator | 00:01:11.540 STDOUT terraform:   + file_permission = "0600"
2025-05-17 00:01:11.540286 | orchestrator | 00:01:11.540 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-05-17 00:01:11.540318 | orchestrator | 00:01:11.540 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.540326 | orchestrator | 00:01:11.540 STDOUT terraform:  }
2025-05-17 00:01:11.540369 | orchestrator | 00:01:11.540 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-17 00:01:11.540397 | orchestrator | 00:01:11.540 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-17 00:01:11.540419 | orchestrator | 00:01:11.540 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.540434 | orchestrator | 00:01:11.540 STDOUT terraform:  }
2025-05-17 00:01:11.540507 | orchestrator | 00:01:11.540 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-17 00:01:11.540561 | orchestrator | 00:01:11.540 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-17 00:01:11.540624 | orchestrator | 00:01:11.540 STDOUT terraform:   + attachment = (known after apply)
2025-05-17 00:01:11.540638 | orchestrator | 00:01:11.540 STDOUT terraform:   + availability_zone = "nova"
2025-05-17 00:01:11.540681 | orchestrator | 00:01:11.540 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.540705 | orchestrator | 00:01:11.540 STDOUT terraform:   + image_id = (known after apply)
2025-05-17 00:01:11.545791 | orchestrator | 00:01:11.540 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.545838 | orchestrator | 00:01:11.545 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-05-17 00:01:11.545885 | orchestrator | 00:01:11.545 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.545910 | orchestrator | 00:01:11.545 STDOUT terraform:   + size = 80
2025-05-17 00:01:11.545941 | orchestrator | 00:01:11.545 STDOUT terraform:   + volume_type = "ssd"
2025-05-17 00:01:11.545948 | orchestrator | 00:01:11.545 STDOUT terraform:  }
2025-05-17 00:01:11.546023 | orchestrator | 00:01:11.545 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-17 00:01:11.546088 | orchestrator | 00:01:11.546 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-17 00:01:11.546124 | orchestrator | 00:01:11.546 STDOUT terraform:   + attachment = (known after apply)
2025-05-17 00:01:11.546147 | orchestrator | 00:01:11.546 STDOUT terraform:   + availability_zone = "nova"
2025-05-17 00:01:11.546198 | orchestrator | 00:01:11.546 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.546224 | orchestrator | 00:01:11.546 STDOUT terraform:   + image_id = (known after apply)
2025-05-17 00:01:11.546267 | orchestrator | 00:01:11.546 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.546309 | orchestrator | 00:01:11.546 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-05-17 00:01:11.546364 | orchestrator | 00:01:11.546 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.546398 | orchestrator | 00:01:11.546 STDOUT terraform:   + size = 80
2025-05-17 00:01:11.546441 | orchestrator | 00:01:11.546 STDOUT terraform:   + volume_type = "ssd"
2025-05-17 00:01:11.546520 | orchestrator | 00:01:11.546 STDOUT terraform:  }
2025-05-17 00:01:11.546576 | orchestrator | 00:01:11.546 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-17 00:01:11.546627 | orchestrator | 00:01:11.546 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-17 00:01:11.546660 | orchestrator | 00:01:11.546 STDOUT terraform:   + attachment = (known after apply)
2025-05-17 00:01:11.546686 | orchestrator | 00:01:11.546 STDOUT terraform:   + availability_zone = "nova"
2025-05-17 00:01:11.546725 | orchestrator | 00:01:11.546 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.546759 | orchestrator | 00:01:11.546 STDOUT terraform:   + image_id = (known after apply)
2025-05-17 00:01:11.546789 | orchestrator | 00:01:11.546 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.546851 | orchestrator | 00:01:11.546 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-05-17 00:01:11.546899 | orchestrator | 00:01:11.546 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.546927 | orchestrator | 00:01:11.546 STDOUT terraform:   + size = 80
2025-05-17 00:01:11.546993 | orchestrator | 00:01:11.546 STDOUT terraform:   + volume_type = "ssd"
2025-05-17 00:01:11.547012 | orchestrator | 00:01:11.546 STDOUT terraform:  }
2025-05-17 00:01:11.547062 | orchestrator | 00:01:11.547 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-17 00:01:11.547115 | orchestrator | 00:01:11.547 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-17 00:01:11.547161 | orchestrator | 00:01:11.547 STDOUT terraform:   + attachment = (known after apply)
2025-05-17 00:01:11.547188 | orchestrator | 00:01:11.547 STDOUT terraform:   + availability_zone = "nova"
2025-05-17 00:01:11.547221 | orchestrator | 00:01:11.547 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.547256 | orchestrator | 00:01:11.547 STDOUT terraform:   + image_id = (known after apply)
2025-05-17 00:01:11.547289 | orchestrator | 00:01:11.547 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.547344 | orchestrator | 00:01:11.547 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-05-17 00:01:11.547383 | orchestrator | 00:01:11.547 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.547434 | orchestrator | 00:01:11.547 STDOUT terraform:   + size = 80
2025-05-17 00:01:11.547458 | orchestrator | 00:01:11.547 STDOUT terraform:   + volume_type = "ssd"
2025-05-17 00:01:11.547487 | orchestrator | 00:01:11.547 STDOUT terraform:  }
2025-05-17 00:01:11.547555 | orchestrator | 00:01:11.547 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-17 00:01:11.547605 | orchestrator | 00:01:11.547 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-17 00:01:11.547648 | orchestrator | 00:01:11.547 STDOUT terraform:   + attachment = (known after apply)
2025-05-17 00:01:11.547670 | orchestrator | 00:01:11.547 STDOUT terraform:   + availability_zone = "nova"
2025-05-17 00:01:11.547719 | orchestrator | 00:01:11.547 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.547757 | orchestrator | 00:01:11.547 STDOUT terraform:   + image_id = (known after apply)
2025-05-17 00:01:11.547806 | orchestrator | 00:01:11.547 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.547848 | orchestrator | 00:01:11.547 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-05-17 00:01:11.547883 | orchestrator | 00:01:11.547 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.547905 | orchestrator | 00:01:11.547 STDOUT terraform:   + size = 80
2025-05-17 00:01:11.547937 | orchestrator | 00:01:11.547 STDOUT terraform:   + volume_type = "ssd"
2025-05-17 00:01:11.547954 | orchestrator | 00:01:11.547 STDOUT terraform:  }
2025-05-17 00:01:11.548005 | orchestrator | 00:01:11.547 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-17 00:01:11.548121 | orchestrator | 00:01:11.548 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-17 00:01:11.548137 | orchestrator | 00:01:11.548 STDOUT terraform:   + attachment = (known after apply)
2025-05-17 00:01:11.548142 | orchestrator | 00:01:11.548 STDOUT terraform:   + availability_zone = "nova"
2025-05-17 00:01:11.548148 | orchestrator | 00:01:11.548 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.549054 | orchestrator | 00:01:11.548 STDOUT terraform:   + image_id = (known after apply)
2025-05-17 00:01:11.549078 | orchestrator | 00:01:11.549 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.549113 | orchestrator | 00:01:11.549 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-05-17 00:01:11.549147 | orchestrator | 00:01:11.549 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.549174 | orchestrator | 00:01:11.549 STDOUT terraform:   + size = 80
2025-05-17 00:01:11.549197 | orchestrator | 00:01:11.549 STDOUT terraform:   + volume_type = "ssd"
2025-05-17 00:01:11.549214 | orchestrator | 00:01:11.549 STDOUT terraform:  }
2025-05-17 00:01:11.549273 | orchestrator | 00:01:11.549 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-17 00:01:11.549326 | orchestrator | 00:01:11.549 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-17 00:01:11.549374 | orchestrator | 00:01:11.549 STDOUT terraform:   + attachment = (known after apply)
2025-05-17 00:01:11.549401 | orchestrator | 00:01:11.549 STDOUT terraform:   + availability_zone = "nova"
2025-05-17 00:01:11.549428 | orchestrator | 00:01:11.549 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.549481 | orchestrator | 00:01:11.549 STDOUT terraform:   + image_id = (known after apply)
2025-05-17 00:01:11.549499 | orchestrator | 00:01:11.549 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.549569 | orchestrator | 00:01:11.549 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-05-17 00:01:11.549857 | orchestrator | 00:01:11.549 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.549866 | orchestrator | 00:01:11.549 STDOUT terraform:   + size = 80
2025-05-17 00:01:11.549871 | orchestrator | 00:01:11.549 STDOUT terraform:   + volume_type = "ssd"
2025-05-17 00:01:11.550085 | orchestrator | 00:01:11.550 STDOUT terraform:  }
2025-05-17 00:01:11.550149 | orchestrator | 00:01:11.550 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-17 00:01:11.550211 | orchestrator | 00:01:11.550 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-17 00:01:11.550269 | orchestrator | 00:01:11.550 STDOUT terraform:   + attachment = (known after apply)
2025-05-17 00:01:11.550304 | orchestrator | 00:01:11.550 STDOUT terraform:   + availability_zone = "nova"
2025-05-17 00:01:11.550369 | orchestrator | 00:01:11.550 STDOUT terraform:   + id = (known after apply)
2025-05-17 00:01:11.550634 | orchestrator | 00:01:11.550 STDOUT terraform:   + metadata = (known after apply)
2025-05-17 00:01:11.552227 | orchestrator | 00:01:11.550 STDOUT terraform:   + name = "testbed-volume-0-node-3"
2025-05-17 00:01:11.552282 | orchestrator | 00:01:11.552 STDOUT terraform:   + region = (known after apply)
2025-05-17 00:01:11.552324 | orchestrator | 00:01:11.552 STDOUT terraform:   + size = 20
2025-05-17 00:01:11.552354 | orchestrator | 00:01:11.552 STDOUT terraform:   + volume_type = "ssd"
2025-05-17 00:01:11.552377 | orchestrator | 00:01:11.552 STDOUT terraform:  }
2025-05-17 00:01:11.552435 | orchestrator | 00:01:11.552
STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-17 00:01:11.552527 | orchestrator | 00:01:11.552 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-17 00:01:11.552573 | orchestrator | 00:01:11.552 STDOUT terraform:  + attachment = (known after apply) 2025-05-17 00:01:11.552603 | orchestrator | 00:01:11.552 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.552642 | orchestrator | 00:01:11.552 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.552683 | orchestrator | 00:01:11.552 STDOUT terraform:  + metadata = (known after apply) 2025-05-17 00:01:11.552729 | orchestrator | 00:01:11.552 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-17 00:01:11.552768 | orchestrator | 00:01:11.552 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.552796 | orchestrator | 00:01:11.552 STDOUT terraform:  + size = 20 2025-05-17 00:01:11.552831 | orchestrator | 00:01:11.552 STDOUT terraform:  + volume_type = "ssd" 2025-05-17 00:01:11.552853 | orchestrator | 00:01:11.552 STDOUT terraform:  } 2025-05-17 00:01:11.552905 | orchestrator | 00:01:11.552 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-17 00:01:11.552957 | orchestrator | 00:01:11.552 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-17 00:01:11.552996 | orchestrator | 00:01:11.552 STDOUT terraform:  + attachment = (known after apply) 2025-05-17 00:01:11.553024 | orchestrator | 00:01:11.553 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.553064 | orchestrator | 00:01:11.553 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.553102 | orchestrator | 00:01:11.553 STDOUT terraform:  + metadata = (known after apply) 2025-05-17 00:01:11.553148 | orchestrator | 00:01:11.553 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-17 00:01:11.553189 | orchestrator | 
00:01:11.553 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.553217 | orchestrator | 00:01:11.553 STDOUT terraform:  + size = 20 2025-05-17 00:01:11.553246 | orchestrator | 00:01:11.553 STDOUT terraform:  + volume_type = "ssd" 2025-05-17 00:01:11.553267 | orchestrator | 00:01:11.553 STDOUT terraform:  } 2025-05-17 00:01:11.553325 | orchestrator | 00:01:11.553 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-17 00:01:11.553378 | orchestrator | 00:01:11.553 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-17 00:01:11.553417 | orchestrator | 00:01:11.553 STDOUT terraform:  + attachment = (known after apply) 2025-05-17 00:01:11.553445 | orchestrator | 00:01:11.553 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.553498 | orchestrator | 00:01:11.553 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.553544 | orchestrator | 00:01:11.553 STDOUT terraform:  + metadata = (known after apply) 2025-05-17 00:01:11.553596 | orchestrator | 00:01:11.553 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-17 00:01:11.553638 | orchestrator | 00:01:11.553 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.553666 | orchestrator | 00:01:11.553 STDOUT terraform:  + size = 20 2025-05-17 00:01:11.553696 | orchestrator | 00:01:11.553 STDOUT terraform:  + volume_type = "ssd" 2025-05-17 00:01:11.553717 | orchestrator | 00:01:11.553 STDOUT terraform:  } 2025-05-17 00:01:11.553767 | orchestrator | 00:01:11.553 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-17 00:01:11.553816 | orchestrator | 00:01:11.553 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-17 00:01:11.553854 | orchestrator | 00:01:11.553 STDOUT terraform:  + attachment = (known after apply) 2025-05-17 00:01:11.553883 | orchestrator | 00:01:11.553 STDOUT terraform:  + 
availability_zone = "nova" 2025-05-17 00:01:11.553926 | orchestrator | 00:01:11.553 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.553987 | orchestrator | 00:01:11.553 STDOUT terraform:  + metadata = (known after apply) 2025-05-17 00:01:11.554066 | orchestrator | 00:01:11.554 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-17 00:01:11.554112 | orchestrator | 00:01:11.554 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.554144 | orchestrator | 00:01:11.554 STDOUT terraform:  + size = 20 2025-05-17 00:01:11.554176 | orchestrator | 00:01:11.554 STDOUT terraform:  + volume_type = "ssd" 2025-05-17 00:01:11.554199 | orchestrator | 00:01:11.554 STDOUT terraform:  } 2025-05-17 00:01:11.554250 | orchestrator | 00:01:11.554 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-17 00:01:11.554301 | orchestrator | 00:01:11.554 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-17 00:01:11.554342 | orchestrator | 00:01:11.554 STDOUT terraform:  + attachment = (known after apply) 2025-05-17 00:01:11.554372 | orchestrator | 00:01:11.554 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.554413 | orchestrator | 00:01:11.554 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.554454 | orchestrator | 00:01:11.554 STDOUT terraform:  + metadata = (known after apply) 2025-05-17 00:01:11.554524 | orchestrator | 00:01:11.554 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-17 00:01:11.554567 | orchestrator | 00:01:11.554 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.554596 | orchestrator | 00:01:11.554 STDOUT terraform:  + size = 20 2025-05-17 00:01:11.554627 | orchestrator | 00:01:11.554 STDOUT terraform:  + volume_type = "ssd" 2025-05-17 00:01:11.554650 | orchestrator | 00:01:11.554 STDOUT terraform:  } 2025-05-17 00:01:11.554702 | orchestrator | 00:01:11.554 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-17 00:01:11.554753 | orchestrator | 00:01:11.554 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-17 00:01:11.554801 | orchestrator | 00:01:11.554 STDOUT terraform:  + attachment = (known after apply) 2025-05-17 00:01:11.554830 | orchestrator | 00:01:11.554 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.554869 | orchestrator | 00:01:11.554 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.554907 | orchestrator | 00:01:11.554 STDOUT terraform:  + metadata = (known after apply) 2025-05-17 00:01:11.554952 | orchestrator | 00:01:11.554 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-17 00:01:11.554991 | orchestrator | 00:01:11.554 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.555019 | orchestrator | 00:01:11.554 STDOUT terraform:  + size = 20 2025-05-17 00:01:11.555049 | orchestrator | 00:01:11.555 STDOUT terraform:  + volume_type = "ssd" 2025-05-17 00:01:11.555071 | orchestrator | 00:01:11.555 STDOUT terraform:  } 2025-05-17 00:01:11.555124 | orchestrator | 00:01:11.555 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-17 00:01:11.555178 | orchestrator | 00:01:11.555 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-17 00:01:11.555218 | orchestrator | 00:01:11.555 STDOUT terraform:  + attachment = (known after apply) 2025-05-17 00:01:11.555246 | orchestrator | 00:01:11.555 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.555285 | orchestrator | 00:01:11.555 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.555323 | orchestrator | 00:01:11.555 STDOUT terraform:  + metadata = (known after apply) 2025-05-17 00:01:11.555368 | orchestrator | 00:01:11.555 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-17 00:01:11.555407 | orchestrator | 00:01:11.555 STDOUT 
terraform:  + region = (known after apply) 2025-05-17 00:01:11.555442 | orchestrator | 00:01:11.555 STDOUT terraform:  + size = 20 2025-05-17 00:01:11.555491 | orchestrator | 00:01:11.555 STDOUT terraform:  + volume_type = "ssd" 2025-05-17 00:01:11.555515 | orchestrator | 00:01:11.555 STDOUT terraform:  } 2025-05-17 00:01:11.555567 | orchestrator | 00:01:11.555 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-17 00:01:11.555622 | orchestrator | 00:01:11.555 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-17 00:01:11.555662 | orchestrator | 00:01:11.555 STDOUT terraform:  + attachment = (known after apply) 2025-05-17 00:01:11.555690 | orchestrator | 00:01:11.555 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.555730 | orchestrator | 00:01:11.555 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.555768 | orchestrator | 00:01:11.555 STDOUT terraform:  + metadata = (known after apply) 2025-05-17 00:01:11.555816 | orchestrator | 00:01:11.555 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-17 00:01:11.555855 | orchestrator | 00:01:11.555 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.555881 | orchestrator | 00:01:11.555 STDOUT terraform:  + size = 20 2025-05-17 00:01:11.555908 | orchestrator | 00:01:11.555 STDOUT terraform:  + volume_type = "ssd" 2025-05-17 00:01:11.555933 | orchestrator | 00:01:11.555 STDOUT terraform:  } 2025-05-17 00:01:11.555982 | orchestrator | 00:01:11.555 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-17 00:01:11.556032 | orchestrator | 00:01:11.555 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-17 00:01:11.556073 | orchestrator | 00:01:11.556 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-17 00:01:11.556113 | orchestrator | 00:01:11.556 STDOUT terraform:  + access_ip_v6 = (known after apply) 
2025-05-17 00:01:11.556156 | orchestrator | 00:01:11.556 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-17 00:01:11.556199 | orchestrator | 00:01:11.556 STDOUT terraform:  + all_tags = (known after apply) 2025-05-17 00:01:11.556230 | orchestrator | 00:01:11.556 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.556257 | orchestrator | 00:01:11.556 STDOUT terraform:  + config_drive = true 2025-05-17 00:01:11.556299 | orchestrator | 00:01:11.556 STDOUT terraform:  + created = (known after apply) 2025-05-17 00:01:11.556340 | orchestrator | 00:01:11.556 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-17 00:01:11.556377 | orchestrator | 00:01:11.556 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-17 00:01:11.556407 | orchestrator | 00:01:11.556 STDOUT terraform:  + force_delete = false 2025-05-17 00:01:11.556448 | orchestrator | 00:01:11.556 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.556505 | orchestrator | 00:01:11.556 STDOUT terraform:  + image_id = (known after apply) 2025-05-17 00:01:11.556548 | orchestrator | 00:01:11.556 STDOUT terraform:  + image_name = (known after apply) 2025-05-17 00:01:11.556581 | orchestrator | 00:01:11.556 STDOUT terraform:  + key_pair = "testbed" 2025-05-17 00:01:11.556621 | orchestrator | 00:01:11.556 STDOUT terraform:  + name = "testbed-manager" 2025-05-17 00:01:11.556653 | orchestrator | 00:01:11.556 STDOUT terraform:  + power_state = "active" 2025-05-17 00:01:11.556696 | orchestrator | 00:01:11.556 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.556739 | orchestrator | 00:01:11.556 STDOUT terraform:  + security_groups = (known after apply) 2025-05-17 00:01:11.556773 | orchestrator | 00:01:11.556 STDOUT terraform:  + stop_before_destroy = false 2025-05-17 00:01:11.556815 | orchestrator | 00:01:11.556 STDOUT terraform:  + updated = (known after apply) 2025-05-17 00:01:11.556858 | orchestrator | 00:01:11.556 STDOUT terraform:  + user_data 
= (known after apply) 2025-05-17 00:01:11.556883 | orchestrator | 00:01:11.556 STDOUT terraform:  + block_device { 2025-05-17 00:01:11.556917 | orchestrator | 00:01:11.556 STDOUT terraform:  + boot_index = 0 2025-05-17 00:01:11.556953 | orchestrator | 00:01:11.556 STDOUT terraform:  + delete_on_termination = false 2025-05-17 00:01:11.556990 | orchestrator | 00:01:11.556 STDOUT terraform:  + destination_type = "volume" 2025-05-17 00:01:11.557026 | orchestrator | 00:01:11.556 STDOUT terraform:  + multiattach = false 2025-05-17 00:01:11.557069 | orchestrator | 00:01:11.557 STDOUT terraform:  + source_type = "volume" 2025-05-17 00:01:11.557121 | orchestrator | 00:01:11.557 STDOUT terraform:  + uuid = (known after apply) 2025-05-17 00:01:11.557144 | orchestrator | 00:01:11.557 STDOUT terraform:  } 2025-05-17 00:01:11.557172 | orchestrator | 00:01:11.557 STDOUT terraform:  + network { 2025-05-17 00:01:11.557201 | orchestrator | 00:01:11.557 STDOUT terraform:  + access_network = false 2025-05-17 00:01:11.557240 | orchestrator | 00:01:11.557 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-17 00:01:11.557278 | orchestrator | 00:01:11.557 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-17 00:01:11.557329 | orchestrator | 00:01:11.557 STDOUT terraform:  + mac = (known after apply) 2025-05-17 00:01:11.557374 | orchestrator | 00:01:11.557 STDOUT terraform:  + name = (known after apply) 2025-05-17 00:01:11.557413 | orchestrator | 00:01:11.557 STDOUT terraform:  + port = (known after apply) 2025-05-17 00:01:11.557452 | orchestrator | 00:01:11.557 STDOUT terraform:  + uuid = (known after apply) 2025-05-17 00:01:11.557491 | orchestrator | 00:01:11.557 STDOUT terraform:  } 2025-05-17 00:01:11.557514 | orchestrator | 00:01:11.557 STDOUT terraform:  } 2025-05-17 00:01:11.557565 | orchestrator | 00:01:11.557 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-17 00:01:11.557614 | orchestrator | 00:01:11.557 STDOUT 
terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-17 00:01:11.557656 | orchestrator | 00:01:11.557 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-17 00:01:11.557699 | orchestrator | 00:01:11.557 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-17 00:01:11.557741 | orchestrator | 00:01:11.557 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-17 00:01:11.557783 | orchestrator | 00:01:11.557 STDOUT terraform:  + all_tags = (known after apply) 2025-05-17 00:01:11.557814 | orchestrator | 00:01:11.557 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.557848 | orchestrator | 00:01:11.557 STDOUT terraform:  + config_drive = true 2025-05-17 00:01:11.557890 | orchestrator | 00:01:11.557 STDOUT terraform:  + created = (known after apply) 2025-05-17 00:01:11.557934 | orchestrator | 00:01:11.557 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-17 00:01:11.557971 | orchestrator | 00:01:11.557 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-17 00:01:11.558004 | orchestrator | 00:01:11.557 STDOUT terraform:  + force_delete = false 2025-05-17 00:01:11.558065 | orchestrator | 00:01:11.558 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.558109 | orchestrator | 00:01:11.558 STDOUT terraform:  + image_id = (known after apply) 2025-05-17 00:01:11.558152 | orchestrator | 00:01:11.558 STDOUT terraform:  + image_name = (known after apply) 2025-05-17 00:01:11.558184 | orchestrator | 00:01:11.558 STDOUT terraform:  + key_pair = "testbed" 2025-05-17 00:01:11.558225 | orchestrator | 00:01:11.558 STDOUT terraform:  + name = "testbed-node-0" 2025-05-17 00:01:11.558258 | orchestrator | 00:01:11.558 STDOUT terraform:  + power_state = "active" 2025-05-17 00:01:11.558305 | orchestrator | 00:01:11.558 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.558347 | orchestrator | 00:01:11.558 STDOUT terraform:  + security_groups = (known after apply) 
2025-05-17 00:01:11.558378 | orchestrator | 00:01:11.558 STDOUT terraform:  + stop_before_destroy = false 2025-05-17 00:01:11.558422 | orchestrator | 00:01:11.558 STDOUT terraform:  + updated = (known after apply) 2025-05-17 00:01:11.558538 | orchestrator | 00:01:11.558 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-17 00:01:11.558567 | orchestrator | 00:01:11.558 STDOUT terraform:  + block_device { 2025-05-17 00:01:11.558599 | orchestrator | 00:01:11.558 STDOUT terraform:  + boot_index = 0 2025-05-17 00:01:11.558636 | orchestrator | 00:01:11.558 STDOUT terraform:  + delete_on_termination = false 2025-05-17 00:01:11.558673 | orchestrator | 00:01:11.558 STDOUT terraform:  + destination_type = "volume" 2025-05-17 00:01:11.558710 | orchestrator | 00:01:11.558 STDOUT terraform:  + multiattach = false 2025-05-17 00:01:11.558748 | orchestrator | 00:01:11.558 STDOUT terraform:  + source_type = "volume" 2025-05-17 00:01:11.558793 | orchestrator | 00:01:11.558 STDOUT terraform:  + uuid = (known after apply) 2025-05-17 00:01:11.558815 | orchestrator | 00:01:11.558 STDOUT terraform:  } 2025-05-17 00:01:11.558838 | orchestrator | 00:01:11.558 STDOUT terraform:  + network { 2025-05-17 00:01:11.558867 | orchestrator | 00:01:11.558 STDOUT terraform:  + access_network = false 2025-05-17 00:01:11.558905 | orchestrator | 00:01:11.558 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-17 00:01:11.558944 | orchestrator | 00:01:11.558 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-17 00:01:11.558986 | orchestrator | 00:01:11.558 STDOUT terraform:  + mac = (known after apply) 2025-05-17 00:01:11.559029 | orchestrator | 00:01:11.558 STDOUT terraform:  + name = (known after apply) 2025-05-17 00:01:11.559069 | orchestrator | 00:01:11.559 STDOUT terraform:  + port = (known after apply) 2025-05-17 00:01:11.559107 | orchestrator | 00:01:11.559 STDOUT terraform:  + uuid = (known after apply) 2025-05-17 00:01:11.559129 | 
orchestrator | 00:01:11.559 STDOUT terraform:  } 2025-05-17 00:01:11.559150 | orchestrator | 00:01:11.559 STDOUT terraform:  } 2025-05-17 00:01:11.559201 | orchestrator | 00:01:11.559 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-17 00:01:11.559250 | orchestrator | 00:01:11.559 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-17 00:01:11.559292 | orchestrator | 00:01:11.559 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-17 00:01:11.559334 | orchestrator | 00:01:11.559 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-17 00:01:11.559379 | orchestrator | 00:01:11.559 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-17 00:01:11.559422 | orchestrator | 00:01:11.559 STDOUT terraform:  + all_tags = (known after apply) 2025-05-17 00:01:11.559452 | orchestrator | 00:01:11.559 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.559514 | orchestrator | 00:01:11.559 STDOUT terraform:  + config_drive = true 2025-05-17 00:01:11.559559 | orchestrator | 00:01:11.559 STDOUT terraform:  + created = (known after apply) 2025-05-17 00:01:11.559601 | orchestrator | 00:01:11.559 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-17 00:01:11.559638 | orchestrator | 00:01:11.559 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-17 00:01:11.559679 | orchestrator | 00:01:11.559 STDOUT terraform:  + force_delete = false 2025-05-17 00:01:11.559727 | orchestrator | 00:01:11.559 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.559770 | orchestrator | 00:01:11.559 STDOUT terraform:  + image_id = (known after apply) 2025-05-17 00:01:11.559812 | orchestrator | 00:01:11.559 STDOUT terraform:  + image_name = (known after apply) 2025-05-17 00:01:11.559843 | orchestrator | 00:01:11.559 STDOUT terraform:  + key_pair = "testbed" 2025-05-17 00:01:11.559880 | orchestrator | 00:01:11.559 STDOUT terraform:  + name = "testbed-node-1" 
2025-05-17 00:01:11.559914 | orchestrator | 00:01:11.559 STDOUT terraform:  + power_state = "active" 2025-05-17 00:01:11.559957 | orchestrator | 00:01:11.559 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.559999 | orchestrator | 00:01:11.559 STDOUT terraform:  + security_groups = (known after apply) 2025-05-17 00:01:11.560030 | orchestrator | 00:01:11.560 STDOUT terraform:  + stop_before_destroy = false 2025-05-17 00:01:11.560073 | orchestrator | 00:01:11.560 STDOUT terraform:  + updated = (known after apply) 2025-05-17 00:01:11.560130 | orchestrator | 00:01:11.560 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-17 00:01:11.560156 | orchestrator | 00:01:11.560 STDOUT terraform:  + block_device { 2025-05-17 00:01:11.560188 | orchestrator | 00:01:11.560 STDOUT terraform:  + boot_index = 0 2025-05-17 00:01:11.560223 | orchestrator | 00:01:11.560 STDOUT terraform:  + delete_on_termination = false 2025-05-17 00:01:11.560259 | orchestrator | 00:01:11.560 STDOUT terraform:  + destination_type = "volume" 2025-05-17 00:01:11.560296 | orchestrator | 00:01:11.560 STDOUT terraform:  + multiattach = false 2025-05-17 00:01:11.560332 | orchestrator | 00:01:11.560 STDOUT terraform:  + source_type = "volume" 2025-05-17 00:01:11.560378 | orchestrator | 00:01:11.560 STDOUT terraform:  + uuid = (known after apply) 2025-05-17 00:01:11.560400 | orchestrator | 00:01:11.560 STDOUT terraform:  } 2025-05-17 00:01:11.560422 | orchestrator | 00:01:11.560 STDOUT terraform:  + network { 2025-05-17 00:01:11.560449 | orchestrator | 00:01:11.560 STDOUT terraform:  + access_network = false 2025-05-17 00:01:11.560501 | orchestrator | 00:01:11.560 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-17 00:01:11.560540 | orchestrator | 00:01:11.560 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-17 00:01:11.560579 | orchestrator | 00:01:11.560 STDOUT terraform:  + mac = (known after apply) 2025-05-17 00:01:11.560618 | 
orchestrator | 00:01:11.560 STDOUT terraform:  + name = (known after apply) 2025-05-17 00:01:11.560661 | orchestrator | 00:01:11.560 STDOUT terraform:  + port = (known after apply) 2025-05-17 00:01:11.560700 | orchestrator | 00:01:11.560 STDOUT terraform:  + uuid = (known after apply) 2025-05-17 00:01:11.560721 | orchestrator | 00:01:11.560 STDOUT terraform:  } 2025-05-17 00:01:11.560743 | orchestrator | 00:01:11.560 STDOUT terraform:  } 2025-05-17 00:01:11.560792 | orchestrator | 00:01:11.560 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-17 00:01:11.560842 | orchestrator | 00:01:11.560 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-17 00:01:11.560884 | orchestrator | 00:01:11.560 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-17 00:01:11.560927 | orchestrator | 00:01:11.560 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-17 00:01:11.560968 | orchestrator | 00:01:11.560 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-17 00:01:11.561010 | orchestrator | 00:01:11.560 STDOUT terraform:  + all_tags = (known after apply) 2025-05-17 00:01:11.561044 | orchestrator | 00:01:11.561 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.561073 | orchestrator | 00:01:11.561 STDOUT terraform:  + config_drive = true 2025-05-17 00:01:11.561115 | orchestrator | 00:01:11.561 STDOUT terraform:  + created = (known after apply) 2025-05-17 00:01:11.561159 | orchestrator | 00:01:11.561 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-17 00:01:11.561200 | orchestrator | 00:01:11.561 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-17 00:01:11.561233 | orchestrator | 00:01:11.561 STDOUT terraform:  + force_delete = false 2025-05-17 00:01:11.561276 | orchestrator | 00:01:11.561 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.561319 | orchestrator | 00:01:11.561 STDOUT terraform:  + image_id = (known after 
apply) 2025-05-17 00:01:11.561361 | orchestrator | 00:01:11.561 STDOUT terraform:  + image_name = (known after apply) 2025-05-17 00:01:11.561393 | orchestrator | 00:01:11.561 STDOUT terraform:  + key_pair = "testbed" 2025-05-17 00:01:11.561431 | orchestrator | 00:01:11.561 STDOUT terraform:  + name = "testbed-node-2" 2025-05-17 00:01:11.561463 | orchestrator | 00:01:11.561 STDOUT terraform:  + power_state = "active" 2025-05-17 00:01:11.561564 | orchestrator | 00:01:11.561 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.561608 | orchestrator | 00:01:11.561 STDOUT terraform:  + security_groups = (known after apply) 2025-05-17 00:01:11.561641 | orchestrator | 00:01:11.561 STDOUT terraform:  + stop_before_destroy = false 2025-05-17 00:01:11.561684 | orchestrator | 00:01:11.561 STDOUT terraform:  + updated = (known after apply) 2025-05-17 00:01:11.561742 | orchestrator | 00:01:11.561 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-17 00:01:11.561770 | orchestrator | 00:01:11.561 STDOUT terraform:  + block_device { 2025-05-17 00:01:11.561803 | orchestrator | 00:01:11.561 STDOUT terraform:  + boot_index = 0 2025-05-17 00:01:11.561840 | orchestrator | 00:01:11.561 STDOUT terraform:  + delete_on_termination = false 2025-05-17 00:01:11.561883 | orchestrator | 00:01:11.561 STDOUT terraform:  + destination_type = "volume" 2025-05-17 00:01:11.561920 | orchestrator | 00:01:11.561 STDOUT terraform:  + multiattach = false 2025-05-17 00:01:11.561958 | orchestrator | 00:01:11.561 STDOUT terraform:  + source_type = "volume" 2025-05-17 00:01:11.562006 | orchestrator | 00:01:11.561 STDOUT terraform:  + uuid = (known after apply) 2025-05-17 00:01:11.562046 | orchestrator | 00:01:11.562 STDOUT terraform:  } 2025-05-17 00:01:11.562071 | orchestrator | 00:01:11.562 STDOUT terraform:  + network { 2025-05-17 00:01:11.562100 | orchestrator | 00:01:11.562 STDOUT terraform:  + access_network = false 2025-05-17 00:01:11.562139 | 
orchestrator | 00:01:11.562 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-17 00:01:11.562181 | orchestrator | 00:01:11.562 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-17 00:01:11.562221 | orchestrator | 00:01:11.562 STDOUT terraform:  + mac = (known after apply) 2025-05-17 00:01:11.562260 | orchestrator | 00:01:11.562 STDOUT terraform:  + name = (known after apply) 2025-05-17 00:01:11.562301 | orchestrator | 00:01:11.562 STDOUT terraform:  + port = (known after apply) 2025-05-17 00:01:11.562341 | orchestrator | 00:01:11.562 STDOUT terraform:  + uuid = (known after apply) 2025-05-17 00:01:11.562363 | orchestrator | 00:01:11.562 STDOUT terraform:  } 2025-05-17 00:01:11.562385 | orchestrator | 00:01:11.562 STDOUT terraform:  } 2025-05-17 00:01:11.562437 | orchestrator | 00:01:11.562 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-17 00:01:11.562500 | orchestrator | 00:01:11.562 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-17 00:01:11.562545 | orchestrator | 00:01:11.562 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-17 00:01:11.562587 | orchestrator | 00:01:11.562 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-17 00:01:11.562631 | orchestrator | 00:01:11.562 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-17 00:01:11.562673 | orchestrator | 00:01:11.562 STDOUT terraform:  + all_tags = (known after apply) 2025-05-17 00:01:11.562706 | orchestrator | 00:01:11.562 STDOUT terraform:  + availability_zone = "nova" 2025-05-17 00:01:11.562737 | orchestrator | 00:01:11.562 STDOUT terraform:  + config_drive = true 2025-05-17 00:01:11.562779 | orchestrator | 00:01:11.562 STDOUT terraform:  + created = (known after apply) 2025-05-17 00:01:11.562822 | orchestrator | 00:01:11.562 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-17 00:01:11.562860 | orchestrator | 00:01:11.562 STDOUT terraform:  + 
2025-05-17 00:01:11.562 | orchestrator | STDOUT terraform:
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
orchestrator | 00:01:11.577 STDOUT terraform:  } 2025-05-17 00:01:11.577590 | orchestrator | 00:01:11.577 STDOUT terraform:  + binding (known after apply) 2025-05-17 00:01:11.577613 | orchestrator | 00:01:11.577 STDOUT terraform:  + fixed_ip { 2025-05-17 00:01:11.577646 | orchestrator | 00:01:11.577 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-17 00:01:11.577682 | orchestrator | 00:01:11.577 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-17 00:01:11.577703 | orchestrator | 00:01:11.577 STDOUT terraform:  } 2025-05-17 00:01:11.577725 | orchestrator | 00:01:11.577 STDOUT terraform:  } 2025-05-17 00:01:11.577778 | orchestrator | 00:01:11.577 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-17 00:01:11.577831 | orchestrator | 00:01:11.577 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-17 00:01:11.577877 | orchestrator | 00:01:11.577 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-17 00:01:11.577923 | orchestrator | 00:01:11.577 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-17 00:01:11.577966 | orchestrator | 00:01:11.577 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-17 00:01:11.578010 | orchestrator | 00:01:11.577 STDOUT terraform:  + all_tags = (known after apply) 2025-05-17 00:01:11.578069 | orchestrator | 00:01:11.578 STDOUT terraform:  + device_id = (known after apply) 2025-05-17 00:01:11.578112 | orchestrator | 00:01:11.578 STDOUT terraform:  + device_owner = (known after apply) 2025-05-17 00:01:11.578154 | orchestrator | 00:01:11.578 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-17 00:01:11.578199 | orchestrator | 00:01:11.578 STDOUT terraform:  + dns_name = (known after apply) 2025-05-17 00:01:11.578243 | orchestrator | 00:01:11.578 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.578286 | orchestrator | 00:01:11.578 STDOUT terraform:  + 
mac_address = (known after apply) 2025-05-17 00:01:11.578330 | orchestrator | 00:01:11.578 STDOUT terraform:  + network_id = (known after apply) 2025-05-17 00:01:11.578372 | orchestrator | 00:01:11.578 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-17 00:01:11.578416 | orchestrator | 00:01:11.578 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-17 00:01:11.578459 | orchestrator | 00:01:11.578 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.578518 | orchestrator | 00:01:11.578 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-17 00:01:11.578562 | orchestrator | 00:01:11.578 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.578595 | orchestrator | 00:01:11.578 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.578630 | orchestrator | 00:01:11.578 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-17 00:01:11.578650 | orchestrator | 00:01:11.578 STDOUT terraform:  } 2025-05-17 00:01:11.578677 | orchestrator | 00:01:11.578 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.578716 | orchestrator | 00:01:11.578 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-17 00:01:11.578739 | orchestrator | 00:01:11.578 STDOUT terraform:  } 2025-05-17 00:01:11.578765 | orchestrator | 00:01:11.578 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.578801 | orchestrator | 00:01:11.578 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-17 00:01:11.578823 | orchestrator | 00:01:11.578 STDOUT terraform:  } 2025-05-17 00:01:11.578849 | orchestrator | 00:01:11.578 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.578885 | orchestrator | 00:01:11.578 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-17 00:01:11.578905 | orchestrator | 00:01:11.578 STDOUT terraform:  } 2025-05-17 00:01:11.578935 | orchestrator | 00:01:11.578 STDOUT terraform:  + binding (known after apply) 2025-05-17 
00:01:11.578957 | orchestrator | 00:01:11.578 STDOUT terraform:  + fixed_ip { 2025-05-17 00:01:11.578991 | orchestrator | 00:01:11.578 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-17 00:01:11.579029 | orchestrator | 00:01:11.579 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-17 00:01:11.579051 | orchestrator | 00:01:11.579 STDOUT terraform:  } 2025-05-17 00:01:11.579072 | orchestrator | 00:01:11.579 STDOUT terraform:  } 2025-05-17 00:01:11.579125 | orchestrator | 00:01:11.579 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-17 00:01:11.579180 | orchestrator | 00:01:11.579 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-17 00:01:11.579224 | orchestrator | 00:01:11.579 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-17 00:01:11.579266 | orchestrator | 00:01:11.579 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-17 00:01:11.579308 | orchestrator | 00:01:11.579 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-17 00:01:11.579351 | orchestrator | 00:01:11.579 STDOUT terraform:  + all_tags = (known after apply) 2025-05-17 00:01:11.579394 | orchestrator | 00:01:11.579 STDOUT terraform:  + device_id = (known after apply) 2025-05-17 00:01:11.579437 | orchestrator | 00:01:11.579 STDOUT terraform:  + device_owner = (known after apply) 2025-05-17 00:01:11.579501 | orchestrator | 00:01:11.579 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-17 00:01:11.579552 | orchestrator | 00:01:11.579 STDOUT terraform:  + dns_name = (known after apply) 2025-05-17 00:01:11.579596 | orchestrator | 00:01:11.579 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.579639 | orchestrator | 00:01:11.579 STDOUT terraform:  + mac_address = (known after apply) 2025-05-17 00:01:11.579682 | orchestrator | 00:01:11.579 STDOUT terraform:  + network_id = (known after apply) 2025-05-17 
00:01:11.579729 | orchestrator | 00:01:11.579 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-17 00:01:11.579771 | orchestrator | 00:01:11.579 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-17 00:01:11.579816 | orchestrator | 00:01:11.579 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.579858 | orchestrator | 00:01:11.579 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-17 00:01:11.579911 | orchestrator | 00:01:11.579 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.579956 | orchestrator | 00:01:11.579 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.579995 | orchestrator | 00:01:11.579 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-17 00:01:11.580016 | orchestrator | 00:01:11.580 STDOUT terraform:  } 2025-05-17 00:01:11.580042 | orchestrator | 00:01:11.580 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.580080 | orchestrator | 00:01:11.580 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-17 00:01:11.580101 | orchestrator | 00:01:11.580 STDOUT terraform:  } 2025-05-17 00:01:11.580130 | orchestrator | 00:01:11.580 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.580168 | orchestrator | 00:01:11.580 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-17 00:01:11.580189 | orchestrator | 00:01:11.580 STDOUT terraform:  } 2025-05-17 00:01:11.580216 | orchestrator | 00:01:11.580 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.580252 | orchestrator | 00:01:11.580 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-17 00:01:11.580272 | orchestrator | 00:01:11.580 STDOUT terraform:  } 2025-05-17 00:01:11.580302 | orchestrator | 00:01:11.580 STDOUT terraform:  + binding (known after apply) 2025-05-17 00:01:11.580323 | orchestrator | 00:01:11.580 STDOUT terraform:  + fixed_ip { 2025-05-17 00:01:11.580354 | orchestrator | 00:01:11.580 STDOUT terraform:  + 
ip_address = "192.168.16.14" 2025-05-17 00:01:11.580391 | orchestrator | 00:01:11.580 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-17 00:01:11.580412 | orchestrator | 00:01:11.580 STDOUT terraform:  } 2025-05-17 00:01:11.580432 | orchestrator | 00:01:11.580 STDOUT terraform:  } 2025-05-17 00:01:11.580510 | orchestrator | 00:01:11.580 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-17 00:01:11.580564 | orchestrator | 00:01:11.580 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-17 00:01:11.580609 | orchestrator | 00:01:11.580 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-17 00:01:11.580653 | orchestrator | 00:01:11.580 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-17 00:01:11.580697 | orchestrator | 00:01:11.580 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-17 00:01:11.580741 | orchestrator | 00:01:11.580 STDOUT terraform:  + all_tags = (known after apply) 2025-05-17 00:01:11.580786 | orchestrator | 00:01:11.580 STDOUT terraform:  + device_id = (known after apply) 2025-05-17 00:01:11.580830 | orchestrator | 00:01:11.580 STDOUT terraform:  + device_owner = (known after apply) 2025-05-17 00:01:11.580880 | orchestrator | 00:01:11.580 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-17 00:01:11.580924 | orchestrator | 00:01:11.580 STDOUT terraform:  + dns_name = (known after apply) 2025-05-17 00:01:11.580969 | orchestrator | 00:01:11.580 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.581014 | orchestrator | 00:01:11.580 STDOUT terraform:  + mac_address = (known after apply) 2025-05-17 00:01:11.581059 | orchestrator | 00:01:11.581 STDOUT terraform:  + network_id = (known after apply) 2025-05-17 00:01:11.581103 | orchestrator | 00:01:11.581 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-17 00:01:11.581146 | orchestrator | 00:01:11.581 
STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-17 00:01:11.581192 | orchestrator | 00:01:11.581 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.581236 | orchestrator | 00:01:11.581 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-17 00:01:11.581280 | orchestrator | 00:01:11.581 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.581308 | orchestrator | 00:01:11.581 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.581345 | orchestrator | 00:01:11.581 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-17 00:01:11.581368 | orchestrator | 00:01:11.581 STDOUT terraform:  } 2025-05-17 00:01:11.581396 | orchestrator | 00:01:11.581 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.581433 | orchestrator | 00:01:11.581 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-17 00:01:11.581455 | orchestrator | 00:01:11.581 STDOUT terraform:  } 2025-05-17 00:01:11.581505 | orchestrator | 00:01:11.581 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.581549 | orchestrator | 00:01:11.581 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-17 00:01:11.581571 | orchestrator | 00:01:11.581 STDOUT terraform:  } 2025-05-17 00:01:11.581599 | orchestrator | 00:01:11.581 STDOUT terraform:  + allowed_address_pairs { 2025-05-17 00:01:11.581634 | orchestrator | 00:01:11.581 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-17 00:01:11.581656 | orchestrator | 00:01:11.581 STDOUT terraform:  } 2025-05-17 00:01:11.581687 | orchestrator | 00:01:11.581 STDOUT terraform:  + binding (known after apply) 2025-05-17 00:01:11.581709 | orchestrator | 00:01:11.581 STDOUT terraform:  + fixed_ip { 2025-05-17 00:01:11.581744 | orchestrator | 00:01:11.581 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-17 00:01:11.581782 | orchestrator | 00:01:11.581 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-17 00:01:11.581804 | 
orchestrator | 00:01:11.581 STDOUT terraform:  } 2025-05-17 00:01:11.581825 | orchestrator | 00:01:11.581 STDOUT terraform:  } 2025-05-17 00:01:11.581880 | orchestrator | 00:01:11.581 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-17 00:01:11.581935 | orchestrator | 00:01:11.581 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-17 00:01:11.581966 | orchestrator | 00:01:11.581 STDOUT terraform:  + force_destroy = false 2025-05-17 00:01:11.582008 | orchestrator | 00:01:11.581 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.582072 | orchestrator | 00:01:11.582 STDOUT terraform:  + port_id = (known after apply) 2025-05-17 00:01:11.582110 | orchestrator | 00:01:11.582 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.582145 | orchestrator | 00:01:11.582 STDOUT terraform:  + router_id = (known after apply) 2025-05-17 00:01:11.582184 | orchestrator | 00:01:11.582 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-17 00:01:11.582204 | orchestrator | 00:01:11.582 STDOUT terraform:  } 2025-05-17 00:01:11.582247 | orchestrator | 00:01:11.582 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-17 00:01:11.582289 | orchestrator | 00:01:11.582 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-17 00:01:11.582336 | orchestrator | 00:01:11.582 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-17 00:01:11.582380 | orchestrator | 00:01:11.582 STDOUT terraform:  + all_tags = (known after apply) 2025-05-17 00:01:11.582410 | orchestrator | 00:01:11.582 STDOUT terraform:  + availability_zone_hints = [ 2025-05-17 00:01:11.582431 | orchestrator | 00:01:11.582 STDOUT terraform:  + "nova", 2025-05-17 00:01:11.582453 | orchestrator | 00:01:11.582 STDOUT terraform:  ] 2025-05-17 00:01:11.582530 | orchestrator | 00:01:11.582 STDOUT terraform:  + distributed = 
(known after apply) 2025-05-17 00:01:11.582577 | orchestrator | 00:01:11.582 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-17 00:01:11.582634 | orchestrator | 00:01:11.582 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-17 00:01:11.582679 | orchestrator | 00:01:11.582 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.582717 | orchestrator | 00:01:11.582 STDOUT terraform:  + name = "testbed" 2025-05-17 00:01:11.582761 | orchestrator | 00:01:11.582 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.582805 | orchestrator | 00:01:11.582 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.582842 | orchestrator | 00:01:11.582 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-17 00:01:11.582864 | orchestrator | 00:01:11.582 STDOUT terraform:  } 2025-05-17 00:01:11.582927 | orchestrator | 00:01:11.582 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-17 00:01:11.582989 | orchestrator | 00:01:11.582 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-17 00:01:11.583018 | orchestrator | 00:01:11.582 STDOUT terraform:  + description = "ssh" 2025-05-17 00:01:11.583051 | orchestrator | 00:01:11.583 STDOUT terraform:  + direction = "ingress" 2025-05-17 00:01:11.583080 | orchestrator | 00:01:11.583 STDOUT terraform:  + ethertype = "IPv4" 2025-05-17 00:01:11.583118 | orchestrator | 00:01:11.583 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.583146 | orchestrator | 00:01:11.583 STDOUT terraform:  + port_range_max = 22 2025-05-17 00:01:11.583175 | orchestrator | 00:01:11.583 STDOUT terraform:  + port_range_min = 22 2025-05-17 00:01:11.583209 | orchestrator | 00:01:11.583 STDOUT terraform:  + protocol = "tcp" 2025-05-17 00:01:11.583249 | orchestrator | 00:01:11.583 STDOUT terraform:  + region = (known after 
apply) 2025-05-17 00:01:11.583287 | orchestrator | 00:01:11.583 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-17 00:01:11.583320 | orchestrator | 00:01:11.583 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-17 00:01:11.583357 | orchestrator | 00:01:11.583 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-17 00:01:11.583395 | orchestrator | 00:01:11.583 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.583417 | orchestrator | 00:01:11.583 STDOUT terraform:  } 2025-05-17 00:01:11.583496 | orchestrator | 00:01:11.583 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-17 00:01:11.583563 | orchestrator | 00:01:11.583 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-17 00:01:11.583599 | orchestrator | 00:01:11.583 STDOUT terraform:  + description = "wireguard" 2025-05-17 00:01:11.583631 | orchestrator | 00:01:11.583 STDOUT terraform:  + direction = "ingress" 2025-05-17 00:01:11.583658 | orchestrator | 00:01:11.583 STDOUT terraform:  + ethertype = "IPv4" 2025-05-17 00:01:11.583696 | orchestrator | 00:01:11.583 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.583724 | orchestrator | 00:01:11.583 STDOUT terraform:  + port_range_max = 51820 2025-05-17 00:01:11.583752 | orchestrator | 00:01:11.583 STDOUT terraform:  + port_range_min = 51820 2025-05-17 00:01:11.583779 | orchestrator | 00:01:11.583 STDOUT terraform:  + protocol = "udp" 2025-05-17 00:01:11.583818 | orchestrator | 00:01:11.583 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.583856 | orchestrator | 00:01:11.583 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-17 00:01:11.583888 | orchestrator | 00:01:11.583 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-17 00:01:11.583925 | orchestrator | 00:01:11.583 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-05-17 00:01:11.583962 | orchestrator | 00:01:11.583 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.583983 | orchestrator | 00:01:11.583 STDOUT terraform:  } 2025-05-17 00:01:11.584045 | orchestrator | 00:01:11.583 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-17 00:01:11.584106 | orchestrator | 00:01:11.584 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-17 00:01:11.584139 | orchestrator | 00:01:11.584 STDOUT terraform:  + direction = "ingress" 2025-05-17 00:01:11.584169 | orchestrator | 00:01:11.584 STDOUT terraform:  + ethertype = "IPv4" 2025-05-17 00:01:11.584208 | orchestrator | 00:01:11.584 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.584237 | orchestrator | 00:01:11.584 STDOUT terraform:  + protocol = "tcp" 2025-05-17 00:01:11.584275 | orchestrator | 00:01:11.584 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.584317 | orchestrator | 00:01:11.584 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-17 00:01:11.584355 | orchestrator | 00:01:11.584 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-17 00:01:11.584393 | orchestrator | 00:01:11.584 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-17 00:01:11.584430 | orchestrator | 00:01:11.584 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.584452 | orchestrator | 00:01:11.584 STDOUT terraform:  } 2025-05-17 00:01:11.584538 | orchestrator | 00:01:11.584 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-17 00:01:11.584601 | orchestrator | 00:01:11.584 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-17 00:01:11.584633 | orchestrator | 00:01:11.584 STDOUT terraform:  + direction = 
"ingress" 2025-05-17 00:01:11.584665 | orchestrator | 00:01:11.584 STDOUT terraform:  + ethertype = "IPv4" 2025-05-17 00:01:11.584704 | orchestrator | 00:01:11.584 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.584738 | orchestrator | 00:01:11.584 STDOUT terraform:  + protocol = "udp" 2025-05-17 00:01:11.584777 | orchestrator | 00:01:11.584 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.584814 | orchestrator | 00:01:11.584 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-17 00:01:11.584852 | orchestrator | 00:01:11.584 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-17 00:01:11.584888 | orchestrator | 00:01:11.584 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-17 00:01:11.584926 | orchestrator | 00:01:11.584 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.584948 | orchestrator | 00:01:11.584 STDOUT terraform:  } 2025-05-17 00:01:11.585010 | orchestrator | 00:01:11.584 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-17 00:01:11.585070 | orchestrator | 00:01:11.585 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-17 00:01:11.585103 | orchestrator | 00:01:11.585 STDOUT terraform:  + direction = "ingress" 2025-05-17 00:01:11.585132 | orchestrator | 00:01:11.585 STDOUT terraform:  + ethertype = "IPv4" 2025-05-17 00:01:11.585170 | orchestrator | 00:01:11.585 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.585199 | orchestrator | 00:01:11.585 STDOUT terraform:  + protocol = "icmp" 2025-05-17 00:01:11.585240 | orchestrator | 00:01:11.585 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.585277 | orchestrator | 00:01:11.585 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-17 00:01:11.585310 | orchestrator | 00:01:11.585 STDOUT terraform:  + remote_ip_prefix = 
"0.0.0.0/0" 2025-05-17 00:01:11.585347 | orchestrator | 00:01:11.585 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-17 00:01:11.585384 | orchestrator | 00:01:11.585 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.585405 | orchestrator | 00:01:11.585 STDOUT terraform:  } 2025-05-17 00:01:11.585463 | orchestrator | 00:01:11.585 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-17 00:01:11.585555 | orchestrator | 00:01:11.585 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-17 00:01:11.585586 | orchestrator | 00:01:11.585 STDOUT terraform:  + direction = "ingress" 2025-05-17 00:01:11.585617 | orchestrator | 00:01:11.585 STDOUT terraform:  + ethertype = "IPv4" 2025-05-17 00:01:11.585655 | orchestrator | 00:01:11.585 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.585685 | orchestrator | 00:01:11.585 STDOUT terraform:  + protocol = "tcp" 2025-05-17 00:01:11.585722 | orchestrator | 00:01:11.585 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.585760 | orchestrator | 00:01:11.585 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-17 00:01:11.585793 | orchestrator | 00:01:11.585 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-17 00:01:11.585830 | orchestrator | 00:01:11.585 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-17 00:01:11.585867 | orchestrator | 00:01:11.585 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.585890 | orchestrator | 00:01:11.585 STDOUT terraform:  } 2025-05-17 00:01:11.585948 | orchestrator | 00:01:11.585 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-17 00:01:11.586007 | orchestrator | 00:01:11.585 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-17 
00:01:11.586055 | orchestrator | 00:01:11.586 STDOUT terraform:  + direction = "ingress" 2025-05-17 00:01:11.586086 | orchestrator | 00:01:11.586 STDOUT terraform:  + ethertype = "IPv4" 2025-05-17 00:01:11.586124 | orchestrator | 00:01:11.586 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.586152 | orchestrator | 00:01:11.586 STDOUT terraform:  + protocol = "udp" 2025-05-17 00:01:11.586190 | orchestrator | 00:01:11.586 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.586227 | orchestrator | 00:01:11.586 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-17 00:01:11.586260 | orchestrator | 00:01:11.586 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-17 00:01:11.586297 | orchestrator | 00:01:11.586 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-17 00:01:11.586334 | orchestrator | 00:01:11.586 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-17 00:01:11.586357 | orchestrator | 00:01:11.586 STDOUT terraform:  } 2025-05-17 00:01:11.586416 | orchestrator | 00:01:11.586 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-17 00:01:11.586490 | orchestrator | 00:01:11.586 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-17 00:01:11.586525 | orchestrator | 00:01:11.586 STDOUT terraform:  + direction = "ingress" 2025-05-17 00:01:11.586557 | orchestrator | 00:01:11.586 STDOUT terraform:  + ethertype = "IPv4" 2025-05-17 00:01:11.586597 | orchestrator | 00:01:11.586 STDOUT terraform:  + id = (known after apply) 2025-05-17 00:01:11.586626 | orchestrator | 00:01:11.586 STDOUT terraform:  + protocol = "icmp" 2025-05-17 00:01:11.586671 | orchestrator | 00:01:11.586 STDOUT terraform:  + region = (known after apply) 2025-05-17 00:01:11.586713 | orchestrator | 00:01:11.586 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-17 00:01:11.586747 | orchestrator | 
00:01:11.586 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-17 00:01:11.586785 | orchestrator | 00:01:11.586 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-17 00:01:11.586823 | orchestrator | 00:01:11.586 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-17 00:01:11.586845 | orchestrator | 00:01:11.586 STDOUT terraform:  }
2025-05-17 00:01:11.586902 | orchestrator | 00:01:11.586 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-05-17 00:01:11.586961 | orchestrator | 00:01:11.586 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-05-17 00:01:11.586990 | orchestrator | 00:01:11.586 STDOUT terraform:  + description = "vrrp"
2025-05-17 00:01:11.587025 | orchestrator | 00:01:11.587 STDOUT terraform:  + direction = "ingress"
2025-05-17 00:01:11.587053 | orchestrator | 00:01:11.587 STDOUT terraform:  + ethertype = "IPv4"
2025-05-17 00:01:11.587092 | orchestrator | 00:01:11.587 STDOUT terraform:  + id = (known after apply)
2025-05-17 00:01:11.587120 | orchestrator | 00:01:11.587 STDOUT terraform:  + protocol = "112"
2025-05-17 00:01:11.587160 | orchestrator | 00:01:11.587 STDOUT terraform:  + region = (known after apply)
2025-05-17 00:01:11.587198 | orchestrator | 00:01:11.587 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-17 00:01:11.587231 | orchestrator | 00:01:11.587 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-17 00:01:11.587267 | orchestrator | 00:01:11.587 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-17 00:01:11.587305 | orchestrator | 00:01:11.587 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-17 00:01:11.587325 | orchestrator | 00:01:11.587 STDOUT terraform:  }
2025-05-17 00:01:11.587381 | orchestrator | 00:01:11.587 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-05-17 00:01:11.587436 | orchestrator | 00:01:11.587 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-05-17 00:01:11.587489 | orchestrator | 00:01:11.587 STDOUT terraform:  + all_tags = (known after apply)
2025-05-17 00:01:11.587531 | orchestrator | 00:01:11.587 STDOUT terraform:  + description = "management security group"
2025-05-17 00:01:11.587672 | orchestrator | 00:01:11.587 STDOUT terraform:  + id = (known after apply)
2025-05-17 00:01:11.587722 | orchestrator | 00:01:11.587 STDOUT terraform:  + name = "testbed-management"
2025-05-17 00:01:11.587760 | orchestrator | 00:01:11.587 STDOUT terraform:  + region = (known after apply)
2025-05-17 00:01:11.587799 | orchestrator | 00:01:11.587 STDOUT terraform:  + stateful = (known after apply)
2025-05-17 00:01:11.587837 | orchestrator | 00:01:11.587 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-17 00:01:11.587860 | orchestrator | 00:01:11.587 STDOUT terraform:  }
2025-05-17 00:01:11.587921 | orchestrator | 00:01:11.587 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-05-17 00:01:11.587976 | orchestrator | 00:01:11.587 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-05-17 00:01:11.588016 | orchestrator | 00:01:11.587 STDOUT terraform:  + all_tags = (known after apply)
2025-05-17 00:01:11.588053 | orchestrator | 00:01:11.588 STDOUT terraform:  + description = "node security group"
2025-05-17 00:01:11.588094 | orchestrator | 00:01:11.588 STDOUT terraform:  + id = (known after apply)
2025-05-17 00:01:11.588126 | orchestrator | 00:01:11.588 STDOUT terraform:  + name = "testbed-node"
2025-05-17 00:01:11.588163 | orchestrator | 00:01:11.588 STDOUT terraform:  + region = (known after apply)
2025-05-17 00:01:11.588199 | orchestrator | 00:01:11.588 STDOUT terraform:  + stateful = (known after apply)
2025-05-17 00:01:11.588236 | orchestrator | 00:01:11.588 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-17 00:01:11.588258 | orchestrator | 00:01:11.588 STDOUT terraform:  }
2025-05-17 00:01:11.588313 | orchestrator | 00:01:11.588 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-05-17 00:01:11.588367 | orchestrator | 00:01:11.588 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-05-17 00:01:11.588407 | orchestrator | 00:01:11.588 STDOUT terraform:  + all_tags = (known after apply)
2025-05-17 00:01:11.588447 | orchestrator | 00:01:11.588 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-05-17 00:01:11.588510 | orchestrator | 00:01:11.588 STDOUT terraform:  + dns_nameservers = [
2025-05-17 00:01:11.588539 | orchestrator | 00:01:11.588 STDOUT terraform:  + "8.8.8.8",
2025-05-17 00:01:11.588563 | orchestrator | 00:01:11.588 STDOUT terraform:  + "9.9.9.9",
2025-05-17 00:01:11.588585 | orchestrator | 00:01:11.588 STDOUT terraform:  ]
2025-05-17 00:01:11.588617 | orchestrator | 00:01:11.588 STDOUT terraform:  + enable_dhcp = true
2025-05-17 00:01:11.588655 | orchestrator | 00:01:11.588 STDOUT terraform:  + gateway_ip = (known after apply)
2025-05-17 00:01:11.588695 | orchestrator | 00:01:11.588 STDOUT terraform:  + id = (known after apply)
2025-05-17 00:01:11.588725 | orchestrator | 00:01:11.588 STDOUT terraform:  + ip_version = 4
2025-05-17 00:01:11.588764 | orchestrator | 00:01:11.588 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-05-17 00:01:11.588804 | orchestrator | 00:01:11.588 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-05-17 00:01:11.588852 | orchestrator | 00:01:11.588 STDOUT terraform:  + name = "subnet-testbed-management"
2025-05-17 00:01:11.588893 | orchestrator | 00:01:11.588 STDOUT terraform:  + network_id = (known after apply)
2025-05-17 00:01:11.588922 | orchestrator | 00:01:11.588 STDOUT terraform:  + no_gateway = false
2025-05-17 00:01:11.588964 | orchestrator | 00:01:11.588 STDOUT terraform:  + region = (known after apply)
2025-05-17 00:01:11.589004 | orchestrator | 00:01:11.588 STDOUT terraform:  + service_types = (known after apply)
2025-05-17 00:01:11.589042 | orchestrator | 00:01:11.589 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-17 00:01:11.589079 | orchestrator | 00:01:11.589 STDOUT terraform:  + allocation_pool {
2025-05-17 00:01:11.589113 | orchestrator | 00:01:11.589 STDOUT terraform:  + end = "192.168.31.250"
2025-05-17 00:01:11.589147 | orchestrator | 00:01:11.589 STDOUT terraform:  + start = "192.168.31.200"
2025-05-17 00:01:11.589170 | orchestrator | 00:01:11.589 STDOUT terraform:  }
2025-05-17 00:01:11.589191 | orchestrator | 00:01:11.589 STDOUT terraform:  }
2025-05-17 00:01:11.589224 | orchestrator | 00:01:11.589 STDOUT terraform:  # terraform_data.image will be created
2025-05-17 00:01:11.589258 | orchestrator | 00:01:11.589 STDOUT terraform:  + resource "terraform_data" "image" {
2025-05-17 00:01:11.589290 | orchestrator | 00:01:11.589 STDOUT terraform:  + id = (known after apply)
2025-05-17 00:01:11.589320 | orchestrator | 00:01:11.589 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-17 00:01:11.589352 | orchestrator | 00:01:11.589 STDOUT terraform:  + output = (known after apply)
2025-05-17 00:01:11.589373 | orchestrator | 00:01:11.589 STDOUT terraform:  }
2025-05-17 00:01:11.589411 | orchestrator | 00:01:11.589 STDOUT terraform:  # terraform_data.image_node will be created
2025-05-17 00:01:11.589447 | orchestrator | 00:01:11.589 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-05-17 00:01:11.589496 | orchestrator | 00:01:11.589 STDOUT terraform:  + id = (known after apply)
2025-05-17 00:01:11.589527 | orchestrator | 00:01:11.589 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-17 00:01:11.589558 | orchestrator | 00:01:11.589 STDOUT terraform:  + output = (known after apply)
2025-05-17 00:01:11.589583 | orchestrator | 00:01:11.589 STDOUT terraform:  }
2025-05-17 00:01:11.589621 | orchestrator | 00:01:11.589 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-05-17 00:01:11.589645 | orchestrator | 00:01:11.589 STDOUT terraform: Changes to Outputs:
2025-05-17 00:01:11.589677 | orchestrator | 00:01:11.589 STDOUT terraform:  + manager_address = (sensitive value)
2025-05-17 00:01:11.589709 | orchestrator | 00:01:11.589 STDOUT terraform:  + private_key = (sensitive value)
2025-05-17 00:01:11.769762 | orchestrator | 00:01:11.769 STDOUT terraform: terraform_data.image_node: Creating...
2025-05-17 00:01:11.769837 | orchestrator | 00:01:11.769 STDOUT terraform: terraform_data.image: Creating...
2025-05-17 00:01:11.770594 | orchestrator | 00:01:11.770 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=092fd804-3a67-fb82-42d7-6edf022204bb]
2025-05-17 00:01:11.770919 | orchestrator | 00:01:11.770 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=2c84d491-9186-08e2-50cc-33e125b64e0e]
2025-05-17 00:01:11.788638 | orchestrator | 00:01:11.788 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-05-17 00:01:11.789787 | orchestrator | 00:01:11.789 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-05-17 00:01:11.792739 | orchestrator | 00:01:11.792 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-05-17 00:01:11.794241 | orchestrator | 00:01:11.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-05-17 00:01:11.795689 | orchestrator | 00:01:11.795 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-05-17 00:01:11.799740 | orchestrator | 00:01:11.799 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-05-17 00:01:11.800669 | orchestrator | 00:01:11.800 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-05-17 00:01:11.801172 | orchestrator | 00:01:11.801 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-05-17 00:01:11.801985 | orchestrator | 00:01:11.801 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-05-17 00:01:11.802277 | orchestrator | 00:01:11.802 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-05-17 00:01:12.279697 | orchestrator | 00:01:12.279 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-05-17 00:01:12.285628 | orchestrator | 00:01:12.285 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-17 00:01:12.288055 | orchestrator | 00:01:12.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-05-17 00:01:12.291115 | orchestrator | 00:01:12.290 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-17 00:01:12.295413 | orchestrator | 00:01:12.295 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-05-17 00:01:12.299596 | orchestrator | 00:01:12.299 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-05-17 00:01:17.742733 | orchestrator | 00:01:17.742 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=38fecd1d-64c2-42d1-a9d7-a8ee95d61f55]
2025-05-17 00:01:17.761606 | orchestrator | 00:01:17.761 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-05-17 00:01:17.766629 | orchestrator | 00:01:17.766 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=03970b32333b5f44561d821a5a631b114c091444]
2025-05-17 00:01:17.782943 | orchestrator | 00:01:17.782 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-05-17 00:01:17.788585 | orchestrator | 00:01:17.788 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=4995cc06851889fc022a4e33ec5ea1915cb74083]
2025-05-17 00:01:17.797374 | orchestrator | 00:01:17.797 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-05-17 00:01:21.796034 | orchestrator | 00:01:21.795 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-05-17 00:01:21.797041 | orchestrator | 00:01:21.796 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-05-17 00:01:21.797182 | orchestrator | 00:01:21.797 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-05-17 00:01:21.802307 | orchestrator | 00:01:21.801 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-05-17 00:01:21.803797 | orchestrator | 00:01:21.803 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-05-17 00:01:21.803953 | orchestrator | 00:01:21.803 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-05-17 00:01:22.289249 | orchestrator | 00:01:22.288 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-05-17 00:01:22.296416 | orchestrator | 00:01:22.296 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-05-17 00:01:22.300830 | orchestrator | 00:01:22.300 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-05-17 00:01:22.366592 | orchestrator | 00:01:22.366 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=8746963d-35d6-4275-a53f-fa471798b09a]
2025-05-17 00:01:22.383352 | orchestrator | 00:01:22.383 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-05-17 00:01:22.385310 | orchestrator | 00:01:22.385 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=4c541808-fecb-473a-bfa6-e6107b1a17c0]
2025-05-17 00:01:22.392282 | orchestrator | 00:01:22.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-05-17 00:01:22.401290 | orchestrator | 00:01:22.400 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=e3068b10-d912-449c-8868-8ffe0bc578f0]
2025-05-17 00:01:22.413842 | orchestrator | 00:01:22.413 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-05-17 00:01:22.414341 | orchestrator | 00:01:22.414 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=6120ef73-2521-4d83-8ac9-34a2289f978b]
2025-05-17 00:01:22.416285 | orchestrator | 00:01:22.416 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=0e5716a4-9f06-4595-a8e5-44869be2d3e3]
2025-05-17 00:01:22.423209 | orchestrator | 00:01:22.422 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-05-17 00:01:22.423261 | orchestrator | 00:01:22.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-05-17 00:01:22.425201 | orchestrator | 00:01:22.425 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=bec56d32-b1fb-48c0-a20f-a6daa2f9686d]
2025-05-17 00:01:22.429152 | orchestrator | 00:01:22.429 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-05-17 00:01:22.484207 | orchestrator | 00:01:22.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=6fc6848d-5127-4f65-b412-e829995e25e7]
2025-05-17 00:01:22.492008 | orchestrator | 00:01:22.491 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-05-17 00:01:22.500981 | orchestrator | 00:01:22.500 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=c9243530-1d89-4c38-b4ef-a9d7ed453cca]
2025-05-17 00:01:22.510996 | orchestrator | 00:01:22.510 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=4ddb2821-e209-41e3-b031-9f23c5adf4cf]
2025-05-17 00:01:27.798675 | orchestrator | 00:01:27.798 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-17 00:01:28.257824 | orchestrator | 00:01:28.257 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=5429789d-885a-4f80-a71f-930b52b349ee]
2025-05-17 00:01:28.350849 | orchestrator | 00:01:28.350 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=7663d883-e7ff-43d6-ba64-8cf61b97e8d2]
2025-05-17 00:01:28.358385 | orchestrator | 00:01:28.358 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-17 00:01:32.385076 | orchestrator | 00:01:32.384 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-17 00:01:32.393218 | orchestrator | 00:01:32.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-05-17 00:01:32.414527 | orchestrator | 00:01:32.414 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-17 00:01:32.423761 | orchestrator | 00:01:32.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-17 00:01:32.423827 | orchestrator | 00:01:32.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-17 00:01:32.430213 | orchestrator | 00:01:32.429 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-17 00:01:32.724864 | orchestrator | 00:01:32.724 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=0216f665-ca85-43be-85f8-4def2235ea50]
2025-05-17 00:01:32.784174 | orchestrator | 00:01:32.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=6ebcf83d-a258-4df5-8538-7e1cda047c8b]
2025-05-17 00:01:32.792173 | orchestrator | 00:01:32.791 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=d47211d7-96e0-4a69-a671-7a86a44a64cc]
2025-05-17 00:01:32.806131 | orchestrator | 00:01:32.805 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=8f19b7c7-8ad2-4322-8bec-185edfc09a4c]
2025-05-17 00:01:32.807249 | orchestrator | 00:01:32.807 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=401efc10-68df-4215-9146-18eb1d7fe997]
2025-05-17 00:01:32.900169 | orchestrator | 00:01:32.899 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=f0387a27-7964-46b7-9f4d-05429c377c18]
2025-05-17 00:01:36.088911 | orchestrator | 00:01:36.088 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=863d8345-9393-4b58-9c92-603cb97e8e03]
2025-05-17 00:01:36.095266 | orchestrator | 00:01:36.095 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-17 00:01:36.097769 | orchestrator | 00:01:36.097 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-17 00:01:36.099564 | orchestrator | 00:01:36.099 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-05-17 00:01:36.308906 | orchestrator | 00:01:36.308 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=63e959ba-361a-4028-b248-bc49d42de70f]
2025-05-17 00:01:36.321203 | orchestrator | 00:01:36.320 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-17 00:01:36.321275 | orchestrator | 00:01:36.321 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-17 00:01:36.322255 | orchestrator | 00:01:36.322 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-17 00:01:36.325326 | orchestrator | 00:01:36.325 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-17 00:01:36.329348 | orchestrator | 00:01:36.329 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-17 00:01:36.332040 | orchestrator | 00:01:36.331 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-17 00:01:36.333908 | orchestrator | 00:01:36.333 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-17 00:01:36.338099 | orchestrator | 00:01:36.337 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-17 00:01:36.710364 | orchestrator | 00:01:36.709 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=59836605-ac5f-4488-8031-ef208ae9ff88]
2025-05-17 00:01:36.727168 | orchestrator | 00:01:36.726 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-05-17 00:01:36.795842 | orchestrator | 00:01:36.795 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=29671486-56b7-4819-893f-c52c2fff005d]
2025-05-17 00:01:36.805220 | orchestrator | 00:01:36.804 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-17 00:01:36.950390 | orchestrator | 00:01:36.949 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=ba9a8015-043d-41d3-9c97-57ddc97d403a]
2025-05-17 00:01:36.959056 | orchestrator | 00:01:36.958 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-17 00:01:37.102270 | orchestrator | 00:01:37.101 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=625a6aa6-33ea-4723-b144-c2879804ce1e]
2025-05-17 00:01:37.103066 | orchestrator | 00:01:37.102 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=ac93c2e0-3126-4675-a3dd-ceec9fc98e8d]
2025-05-17 00:01:37.109711 | orchestrator | 00:01:37.109 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-17 00:01:37.115519 | orchestrator | 00:01:37.115 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-17 00:01:37.261946 | orchestrator | 00:01:37.261 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=6625e8a0-0731-47bf-bb31-53847b56169f]
2025-05-17 00:01:37.270626 | orchestrator | 00:01:37.270 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-17 00:01:37.351572 | orchestrator | 00:01:37.351 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=db459bee-a961-4372-9d20-9ce9cf4e5a1a]
2025-05-17 00:01:37.357925 | orchestrator | 00:01:37.357 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-17 00:01:37.497192 | orchestrator | 00:01:37.496 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=d88979a5-7150-4d01-9393-93f5ab0df867]
2025-05-17 00:01:37.505501 | orchestrator | 00:01:37.505 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-17 00:01:37.667847 | orchestrator | 00:01:37.667 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=6a829ad0-c9a5-4b8d-945e-1044be73882c]
2025-05-17 00:01:37.825789 | orchestrator | 00:01:37.825 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=6f5e5b19-3de9-4f45-98bd-017dc82cafa1]
2025-05-17 00:01:41.921370 | orchestrator | 00:01:41.921 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=dd34f0c4-4e38-4973-a767-764ba0e4d6f2]
2025-05-17 00:01:41.944182 | orchestrator | 00:01:41.943 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=be52ba64-7fde-443b-85e8-801918a53449]
2025-05-17 00:01:41.945223 | orchestrator | 00:01:41.945 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=95a19887-5c1b-40e3-9c78-1643068e3936]
2025-05-17 00:01:42.343535 | orchestrator | 00:01:42.343 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=17f0277b-5b65-47a4-887b-18a8b0475fe9]
2025-05-17 00:01:42.634410 | orchestrator | 00:01:42.633 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=2d07c8ea-6993-489e-a48a-8730f9fd5482]
2025-05-17 00:01:42.794202 | orchestrator | 00:01:42.793 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 7s [id=1a5c0a29-1392-40f4-9cec-dc068a4158e3]
2025-05-17 00:01:42.816069 | orchestrator | 00:01:42.815 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 7s [id=66db5433-ed98-4b17-af95-09b1b83a8651]
2025-05-17 00:01:43.880870 | orchestrator | 00:01:43.880 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=00001426-e7f9-4ec9-a35e-a89952f8a4f4]
2025-05-17 00:01:43.897179 | orchestrator | 00:01:43.896 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-17 00:01:43.919675 | orchestrator | 00:01:43.919 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-17 00:01:43.920130 | orchestrator | 00:01:43.920 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-05-17 00:01:43.921318 | orchestrator | 00:01:43.921 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-17 00:01:43.926693 | orchestrator | 00:01:43.926 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-17 00:01:43.931390 | orchestrator | 00:01:43.931 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-17 00:01:43.932444 | orchestrator | 00:01:43.932 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-17 00:01:50.216267 | orchestrator | 00:01:50.215 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=04ba3319-9467-4118-a856-a3c8575bef47]
2025-05-17 00:01:50.226352 | orchestrator | 00:01:50.226 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-17 00:01:50.232476 | orchestrator | 00:01:50.232 STDOUT terraform: local_file.inventory: Creating...
2025-05-17 00:01:50.233268 | orchestrator | 00:01:50.233 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-17 00:01:50.239302 | orchestrator | 00:01:50.238 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ec9eca104d958caee09087975d6798eb0bb86b60]
2025-05-17 00:01:50.239698 | orchestrator | 00:01:50.239 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=30d1748c77c82795d6e88e4b1ba40d79e0a8dfb3]
2025-05-17 00:01:51.326959 | orchestrator | 00:01:51.326 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=04ba3319-9467-4118-a856-a3c8575bef47]
2025-05-17 00:01:53.922052 | orchestrator | 00:01:53.921 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-17 00:01:53.922234 | orchestrator | 00:01:53.921 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-17 00:01:53.922324 | orchestrator | 00:01:53.922 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-17 00:01:53.929198 | orchestrator | 00:01:53.928 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-17 00:01:53.934454 | orchestrator | 00:01:53.934 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-17 00:01:53.939661 | orchestrator | 00:01:53.939 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-17 00:02:03.922243 | orchestrator | 00:02:03.921 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-17 00:02:03.922422 | orchestrator | 00:02:03.922 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-17 00:02:03.922512 | orchestrator | 00:02:03.922 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-17 00:02:03.930325 | orchestrator | 00:02:03.930 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-17 00:02:03.934597 | orchestrator | 00:02:03.934 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-17 00:02:03.940852 | orchestrator | 00:02:03.940 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-17 00:02:04.377735 | orchestrator | 00:02:04.377 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=23ffb604-bc7f-49b0-8701-2be56ed9c23e]
2025-05-17 00:02:04.464538 | orchestrator | 00:02:04.464 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=f5a22f55-f1bf-49f4-997a-2dff898af713]
2025-05-17 00:02:04.529146 | orchestrator | 00:02:04.528 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=378371c5-fd9b-4423-97c3-5725eb76501d]
2025-05-17 00:02:04.542808 | orchestrator | 00:02:04.542 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=27381752-0274-4a11-81e5-6c8ee0106f16]
2025-05-17 00:02:13.924650 | orchestrator | 00:02:13.924 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-05-17 00:02:13.930631 | orchestrator | 00:02:13.930 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-05-17 00:02:14.497354 | orchestrator | 00:02:14.496 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=65cebf97-ac10-481a-90e0-164e0d081e65]
2025-05-17 00:02:14.641088 | orchestrator | 00:02:14.640 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=3343d1b7-fc00-46fa-9fbc-25feaed27a7d]
2025-05-17 00:02:14.660223 | orchestrator | 00:02:14.660 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-17 00:02:14.673336 | orchestrator | 00:02:14.673 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4669773372498713431]
2025-05-17 00:02:14.675844 | orchestrator | 00:02:14.675 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-17 00:02:14.677653 | orchestrator | 00:02:14.677 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-17 00:02:14.678582 | orchestrator | 00:02:14.678 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-05-17 00:02:14.678753 | orchestrator | 00:02:14.678 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-17 00:02:14.679782 | orchestrator | 00:02:14.679 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-17 00:02:14.681499 | orchestrator | 00:02:14.681 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-17 00:02:14.682178 | orchestrator | 00:02:14.682 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-17 00:02:14.683148 | orchestrator | 00:02:14.683 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-05-17 00:02:14.713227 | orchestrator | 00:02:14.713 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-05-17 00:02:14.718639 | orchestrator | 00:02:14.718 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-05-17 00:02:19.995489 | orchestrator | 00:02:19.995 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=3343d1b7-fc00-46fa-9fbc-25feaed27a7d/bec56d32-b1fb-48c0-a20f-a6daa2f9686d]
2025-05-17 00:02:20.004967 | orchestrator | 00:02:19.999 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=23ffb604-bc7f-49b0-8701-2be56ed9c23e/c9243530-1d89-4c38-b4ef-a9d7ed453cca]
2025-05-17 00:02:20.025057 | orchestrator | 00:02:20.024 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=23ffb604-bc7f-49b0-8701-2be56ed9c23e/8746963d-35d6-4275-a53f-fa471798b09a]
2025-05-17 00:02:20.028650 | orchestrator | 00:02:20.028 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=3343d1b7-fc00-46fa-9fbc-25feaed27a7d/e3068b10-d912-449c-8868-8ffe0bc578f0]
2025-05-17 00:02:20.049126 | orchestrator | 00:02:20.048 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=27381752-0274-4a11-81e5-6c8ee0106f16/6120ef73-2521-4d83-8ac9-34a2289f978b]
2025-05-17 00:02:20.062560 | orchestrator | 00:02:20.061 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=23ffb604-bc7f-49b0-8701-2be56ed9c23e/4ddb2821-e209-41e3-b031-9f23c5adf4cf]
2025-05-17 00:02:20.082629 | orchestrator | 00:02:20.081 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=3343d1b7-fc00-46fa-9fbc-25feaed27a7d/6fc6848d-5127-4f65-b412-e829995e25e7]
2025-05-17 00:02:20.107786 | orchestrator | 00:02:20.107 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=27381752-0274-4a11-81e5-6c8ee0106f16/4c541808-fecb-473a-bfa6-e6107b1a17c0]
2025-05-17 00:02:20.238405 | orchestrator | 00:02:20.237 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=27381752-0274-4a11-81e5-6c8ee0106f16/0e5716a4-9f06-4595-a8e5-44869be2d3e3]
2025-05-17 00:02:24.715420 | orchestrator | 00:02:24.715 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-05-17 00:02:34.716334 | orchestrator | 00:02:34.715 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-05-17 00:02:35.085839 | orchestrator | 00:02:35.085 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=14c5e416-304d-4c79-9781-701a5178ce31]
2025-05-17 00:02:35.110987 | orchestrator | 00:02:35.110 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-05-17 00:02:35.111085 | orchestrator | 00:02:35.110 STDOUT terraform: Outputs:
2025-05-17 00:02:35.111098 | orchestrator | 00:02:35.110 STDOUT terraform: manager_address =
2025-05-17 00:02:35.111110 | orchestrator | 00:02:35.111 STDOUT terraform: private_key =
2025-05-17 00:02:35.618442 | orchestrator | ok: Runtime: 0:01:34.540135
2025-05-17 00:02:35.660412 |
2025-05-17 00:02:35.660594 | TASK [Fetch manager address]
2025-05-17 00:02:36.146465 | orchestrator | ok
2025-05-17 00:02:36.156721 |
2025-05-17 00:02:36.156856 | TASK [Set manager_host address]
2025-05-17 00:02:36.265917 | orchestrator | ok
2025-05-17 00:02:36.285625 |
2025-05-17 00:02:36.285814 | LOOP [Update ansible collections]
2025-05-17 00:02:37.414747 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-17 00:02:37.415266 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-05-17 00:02:37.415320 | orchestrator | Starting galaxy collection install process
2025-05-17 00:02:37.415351 | orchestrator | Process install dependency map
2025-05-17 00:02:37.415379 | orchestrator | Starting collection install process
2025-05-17 00:02:37.415404 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2025-05-17 00:02:37.415436 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2025-05-17 00:02:37.415467 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-05-17 00:02:37.415531 | orchestrator | ok: Item: commons Runtime: 0:00:00.800239
2025-05-17 00:02:38.247711 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-17 00:02:38.247926 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-05-17 00:02:38.248003 | orchestrator | Starting galaxy collection install process
2025-05-17 00:02:38.248093 | orchestrator | Process install dependency map
2025-05-17 00:02:38.248153 | orchestrator | Starting collection install process
2025-05-17 00:02:38.248197 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2025-05-17 00:02:38.248243 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2025-05-17 00:02:38.248286 | orchestrator | osism.services:999.0.0 was installed successfully
2025-05-17 00:02:38.248350 | orchestrator | ok: Item: services Runtime: 0:00:00.556754
2025-05-17 00:02:38.274465 |
2025-05-17 00:02:38.274671 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-05-17 00:02:48.857501 | orchestrator | ok
2025-05-17 00:02:48.871104 |
2025-05-17 00:02:48.871275 | TASK [Wait a little longer for the manager so that everything is ready]
2025-05-17 00:03:48.913049 | orchestrator | ok
2025-05-17 00:03:48.923265 |
2025-05-17 00:03:48.923386 | TASK [Fetch manager ssh hostkey]
2025-05-17 00:03:50.502485 | orchestrator | Output suppressed because no_log was given
2025-05-17 00:03:50.523285 |
2025-05-17 00:03:50.523527 | TASK [Get ssh keypair from terraform environment]
2025-05-17 00:03:51.063714 | orchestrator | ok: Runtime: 0:00:00.010041
2025-05-17 00:03:51.079631 |
2025-05-17 00:03:51.079801 | TASK [Point out that the following task takes some time and does not give any output]
2025-05-17 00:03:51.130308 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-05-17 00:03:51.140626 |
2025-05-17 00:03:51.140776 | TASK [Run manager part 0]
2025-05-17 00:03:52.593072 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-17 00:03:52.740150 | orchestrator |
2025-05-17 00:03:52.740207 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-05-17 00:03:52.740218 | orchestrator |
2025-05-17 00:03:52.740237 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-05-17 00:03:54.497047 | orchestrator | ok: [testbed-manager]
2025-05-17 00:03:54.497104 | orchestrator |
2025-05-17 00:03:54.497125 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-05-17 00:03:54.497134 | orchestrator |
2025-05-17 00:03:54.497144 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-17 00:03:56.411531 | orchestrator | ok: [testbed-manager]
2025-05-17 00:03:56.411695 | orchestrator |
2025-05-17 00:03:56.411715 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-05-17 00:03:57.097498 | orchestrator | ok: [testbed-manager]
2025-05-17 00:03:57.097593 | orchestrator | 2025-05-17 00:03:57.097607 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-17 00:03:57.142741 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:03:57.142791 | orchestrator | 2025-05-17 00:03:57.142801 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-17 00:03:57.166227 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:03:57.166268 | orchestrator | 2025-05-17 00:03:57.166275 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-17 00:03:57.195930 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:03:57.195974 | orchestrator | 2025-05-17 00:03:57.195980 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-17 00:03:57.226038 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:03:57.226082 | orchestrator | 2025-05-17 00:03:57.226088 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-17 00:03:57.263765 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:03:57.263837 | orchestrator | 2025-05-17 00:03:57.263851 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-17 00:03:57.301543 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:03:57.301596 | orchestrator | 2025-05-17 00:03:57.301607 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-17 00:03:57.334167 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:03:57.334212 | orchestrator | 2025-05-17 00:03:57.334221 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-17 00:03:58.178034 | orchestrator | changed: [testbed-manager] 2025-05-17 00:03:58.178121 | orchestrator | 2025-05-17 00:03:58.178135 | 
orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-17 00:06:59.438114 | orchestrator | changed: [testbed-manager] 2025-05-17 00:06:59.438972 | orchestrator | 2025-05-17 00:06:59.439007 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-17 00:08:25.467558 | orchestrator | changed: [testbed-manager] 2025-05-17 00:08:25.467699 | orchestrator | 2025-05-17 00:08:25.467717 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-17 00:08:50.000989 | orchestrator | changed: [testbed-manager] 2025-05-17 00:08:50.001103 | orchestrator | 2025-05-17 00:08:50.001123 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-17 00:08:59.124546 | orchestrator | changed: [testbed-manager] 2025-05-17 00:08:59.124700 | orchestrator | 2025-05-17 00:08:59.124718 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-17 00:08:59.170904 | orchestrator | ok: [testbed-manager] 2025-05-17 00:08:59.170933 | orchestrator | 2025-05-17 00:08:59.170941 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-17 00:08:59.975389 | orchestrator | ok: [testbed-manager] 2025-05-17 00:08:59.975510 | orchestrator | 2025-05-17 00:08:59.975542 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-17 00:09:00.720095 | orchestrator | changed: [testbed-manager] 2025-05-17 00:09:00.720193 | orchestrator | 2025-05-17 00:09:00.720211 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-17 00:09:06.685431 | orchestrator | changed: [testbed-manager] 2025-05-17 00:09:06.685473 | orchestrator | 2025-05-17 00:09:06.685496 | orchestrator | TASK [Install ansible-core in venv] 
******************************************** 2025-05-17 00:09:12.560447 | orchestrator | changed: [testbed-manager] 2025-05-17 00:09:12.561017 | orchestrator | 2025-05-17 00:09:12.561043 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-17 00:09:15.144589 | orchestrator | changed: [testbed-manager] 2025-05-17 00:09:15.144699 | orchestrator | 2025-05-17 00:09:15.144711 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-17 00:09:16.909371 | orchestrator | changed: [testbed-manager] 2025-05-17 00:09:16.909453 | orchestrator | 2025-05-17 00:09:16.909466 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-17 00:09:18.043725 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-17 00:09:18.043812 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-17 00:09:18.043822 | orchestrator | 2025-05-17 00:09:18.043830 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-17 00:09:18.086315 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-17 00:09:18.086394 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-17 00:09:18.086409 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-17 00:09:18.086421 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-17 00:09:21.186155 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-17 00:09:21.186261 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-17 00:09:21.186274 | orchestrator | 2025-05-17 00:09:21.186284 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-17 00:09:21.751955 | orchestrator | changed: [testbed-manager] 2025-05-17 00:09:21.752050 | orchestrator | 2025-05-17 00:09:21.752066 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-17 00:11:42.674664 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-17 00:11:42.674730 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-17 00:11:42.674742 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-17 00:11:42.674750 | orchestrator | 2025-05-17 00:11:42.674759 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-17 00:11:44.953320 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-17 00:11:44.953413 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-17 00:11:44.953429 | orchestrator | 2025-05-17 00:11:44.953443 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-17 00:11:44.953455 | orchestrator | 2025-05-17 00:11:44.953467 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-17 00:11:46.379885 | orchestrator | ok: [testbed-manager] 2025-05-17 00:11:46.379993 | orchestrator | 2025-05-17 00:11:46.380012 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-17 00:11:46.430079 | orchestrator | ok: [testbed-manager] 2025-05-17 00:11:46.430138 | 
orchestrator | 2025-05-17 00:11:46.430145 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-17 00:11:46.498854 | orchestrator | ok: [testbed-manager] 2025-05-17 00:11:46.498909 | orchestrator | 2025-05-17 00:11:46.498915 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-17 00:11:47.830614 | orchestrator | changed: [testbed-manager] 2025-05-17 00:11:47.831358 | orchestrator | 2025-05-17 00:11:47.831385 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-17 00:11:48.553404 | orchestrator | changed: [testbed-manager] 2025-05-17 00:11:48.553498 | orchestrator | 2025-05-17 00:11:48.553510 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-17 00:11:49.933306 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-17 00:11:49.933547 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-17 00:11:49.933562 | orchestrator | 2025-05-17 00:11:49.933623 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-17 00:11:51.323441 | orchestrator | changed: [testbed-manager] 2025-05-17 00:11:51.323519 | orchestrator | 2025-05-17 00:11:51.323531 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-17 00:11:53.087255 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-17 00:11:53.088104 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-17 00:11:53.088128 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-17 00:11:53.088141 | orchestrator | 2025-05-17 00:11:53.088156 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-17 00:11:53.670696 | orchestrator | changed: [testbed-manager] 
2025-05-17 00:11:53.670777 | orchestrator | 2025-05-17 00:11:53.670792 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-17 00:11:53.740652 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:11:53.740727 | orchestrator | 2025-05-17 00:11:53.740741 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-17 00:11:54.601304 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-17 00:11:54.601365 | orchestrator | changed: [testbed-manager] 2025-05-17 00:11:54.601376 | orchestrator | 2025-05-17 00:11:54.601384 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-17 00:11:54.644478 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:11:54.644533 | orchestrator | 2025-05-17 00:11:54.644543 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-17 00:11:54.686949 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:11:54.687004 | orchestrator | 2025-05-17 00:11:54.687013 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-17 00:11:54.728986 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:11:54.729041 | orchestrator | 2025-05-17 00:11:54.729049 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-17 00:11:54.787085 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:11:54.787154 | orchestrator | 2025-05-17 00:11:54.787168 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-17 00:11:55.498506 | orchestrator | ok: [testbed-manager] 2025-05-17 00:11:55.498554 | orchestrator | 2025-05-17 00:11:55.498561 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-17 00:11:55.498566 | orchestrator | 2025-05-17 
00:11:55.498573 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-17 00:11:56.945692 | orchestrator | ok: [testbed-manager] 2025-05-17 00:11:56.945741 | orchestrator | 2025-05-17 00:11:56.945747 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-17 00:11:57.895954 | orchestrator | changed: [testbed-manager] 2025-05-17 00:11:57.896074 | orchestrator | 2025-05-17 00:11:57.896090 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:11:57.896103 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-17 00:11:57.896114 | orchestrator | 2025-05-17 00:11:58.043886 | orchestrator | ok: Runtime: 0:08:06.565984 2025-05-17 00:11:58.054421 | 2025-05-17 00:11:58.054566 | TASK [Point out that logging in on the manager is now possible] 2025-05-17 00:11:58.100917 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-05-17 00:11:58.110637 | 2025-05-17 00:11:58.110780 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-17 00:11:58.147216 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-05-17 00:11:58.157772 | 2025-05-17 00:11:58.158120 | TASK [Run manager part 1 + 2] 2025-05-17 00:11:59.015895 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-17 00:11:59.076671 | orchestrator | 2025-05-17 00:11:59.076740 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-17 00:11:59.076747 | orchestrator | 2025-05-17 00:11:59.076762 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-17 00:12:01.944913 | orchestrator | ok: [testbed-manager] 2025-05-17 00:12:01.944982 | orchestrator | 2025-05-17 00:12:01.945005 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-17 00:12:01.987783 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:12:01.987849 | orchestrator | 2025-05-17 00:12:01.987863 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-17 00:12:02.035556 | orchestrator | ok: [testbed-manager] 2025-05-17 00:12:02.035670 | orchestrator | 2025-05-17 00:12:02.035684 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-17 00:12:02.084526 | orchestrator | ok: [testbed-manager] 2025-05-17 00:12:02.084606 | orchestrator | 2025-05-17 00:12:02.084618 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-17 00:12:02.153849 | orchestrator | ok: [testbed-manager] 2025-05-17 00:12:02.153907 | orchestrator | 2025-05-17 00:12:02.153918 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-17 00:12:02.214542 | orchestrator | ok: [testbed-manager] 2025-05-17 00:12:02.214614 | orchestrator | 2025-05-17 00:12:02.214626 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-17 00:12:02.266487 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-17 00:12:02.266527 | orchestrator | 2025-05-17 00:12:02.266533 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-17 00:12:02.976386 | orchestrator | ok: [testbed-manager] 2025-05-17 00:12:02.976467 | orchestrator | 2025-05-17 00:12:02.976479 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-17 00:12:03.028343 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:12:03.028410 | orchestrator | 2025-05-17 00:12:03.028420 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-17 00:12:04.437139 | orchestrator | changed: [testbed-manager] 2025-05-17 00:12:04.437226 | orchestrator | 2025-05-17 00:12:04.437239 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-17 00:12:05.003714 | orchestrator | ok: [testbed-manager] 2025-05-17 00:12:05.003790 | orchestrator | 2025-05-17 00:12:05.003798 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-17 00:12:06.162368 | orchestrator | changed: [testbed-manager] 2025-05-17 00:12:06.162428 | orchestrator | 2025-05-17 00:12:06.162441 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-17 00:12:18.805296 | orchestrator | changed: [testbed-manager] 2025-05-17 00:12:18.805415 | orchestrator | 2025-05-17 00:12:18.805432 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-17 00:12:19.484946 | orchestrator | ok: [testbed-manager] 2025-05-17 00:12:19.484987 | orchestrator | 2025-05-17 00:12:19.484998 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-05-17 00:12:19.542348 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:12:19.542424 | orchestrator | 2025-05-17 00:12:19.542439 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-17 00:12:20.479685 | orchestrator | changed: [testbed-manager] 2025-05-17 00:12:20.479728 | orchestrator | 2025-05-17 00:12:20.479757 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-17 00:12:21.419665 | orchestrator | changed: [testbed-manager] 2025-05-17 00:12:21.419710 | orchestrator | 2025-05-17 00:12:21.419719 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-17 00:12:21.985453 | orchestrator | changed: [testbed-manager] 2025-05-17 00:12:21.985505 | orchestrator | 2025-05-17 00:12:21.985516 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-17 00:12:22.025609 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-17 00:12:22.025678 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-17 00:12:22.025684 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-17 00:12:22.025689 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-17 00:12:24.288091 | orchestrator | changed: [testbed-manager] 2025-05-17 00:12:24.288192 | orchestrator | 2025-05-17 00:12:24.288210 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-17 00:12:33.114969 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-17 00:12:33.115069 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-17 00:12:33.115087 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-17 00:12:33.115099 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-17 00:12:33.115110 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-17 00:12:33.115121 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-17 00:12:33.115132 | orchestrator | 2025-05-17 00:12:33.115145 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-17 00:12:34.158807 | orchestrator | changed: [testbed-manager] 2025-05-17 00:12:34.159419 | orchestrator | 2025-05-17 00:12:34.159436 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-17 00:12:34.205450 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:12:34.205534 | orchestrator | 2025-05-17 00:12:34.205549 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-17 00:12:37.301420 | orchestrator | changed: [testbed-manager] 2025-05-17 00:12:37.301521 | orchestrator | 2025-05-17 00:12:37.301539 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-17 00:12:37.344414 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:12:37.344468 | orchestrator | 2025-05-17 00:12:37.344477 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-17 00:14:12.822341 | orchestrator | changed: [testbed-manager] 2025-05-17 
00:14:12.822430 | orchestrator | 2025-05-17 00:14:12.822450 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-17 00:14:13.925191 | orchestrator | ok: [testbed-manager] 2025-05-17 00:14:13.925241 | orchestrator | 2025-05-17 00:14:13.925250 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:14:13.925257 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-17 00:14:13.925262 | orchestrator | 2025-05-17 00:14:14.272444 | orchestrator | ok: Runtime: 0:02:15.573877 2025-05-17 00:14:14.281491 | 2025-05-17 00:14:14.281615 | TASK [Reboot manager] 2025-05-17 00:14:15.815455 | orchestrator | ok: Runtime: 0:00:00.951818 2025-05-17 00:14:15.823902 | 2025-05-17 00:14:15.824053 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-17 00:14:30.352944 | orchestrator | ok 2025-05-17 00:14:30.363662 | 2025-05-17 00:14:30.363801 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-17 00:15:30.408813 | orchestrator | ok 2025-05-17 00:15:30.419163 | 2025-05-17 00:15:30.419304 | TASK [Deploy manager + bootstrap nodes] 2025-05-17 00:15:32.924539 | orchestrator | 2025-05-17 00:15:32.924810 | orchestrator | # DEPLOY MANAGER 2025-05-17 00:15:32.924831 | orchestrator | 2025-05-17 00:15:32.924844 | orchestrator | + set -e 2025-05-17 00:15:32.924855 | orchestrator | + echo 2025-05-17 00:15:32.924868 | orchestrator | + echo '# DEPLOY MANAGER' 2025-05-17 00:15:32.924880 | orchestrator | + echo 2025-05-17 00:15:32.924932 | orchestrator | + cat /opt/manager-vars.sh 2025-05-17 00:15:32.928132 | orchestrator | export NUMBER_OF_NODES=6 2025-05-17 00:15:32.928155 | orchestrator | 2025-05-17 00:15:32.928166 | orchestrator | export CEPH_VERSION=reef 2025-05-17 00:15:32.928178 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-17 00:15:32.928190 | orchestrator 
| export MANAGER_VERSION=8.1.0 2025-05-17 00:15:32.928210 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-17 00:15:32.928220 | orchestrator | 2025-05-17 00:15:32.928237 | orchestrator | export ARA=false 2025-05-17 00:15:32.928248 | orchestrator | export TEMPEST=false 2025-05-17 00:15:32.928264 | orchestrator | export IS_ZUUL=true 2025-05-17 00:15:32.928275 | orchestrator | 2025-05-17 00:15:32.928291 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2025-05-17 00:15:32.928302 | orchestrator | export EXTERNAL_API=false 2025-05-17 00:15:32.928312 | orchestrator | 2025-05-17 00:15:32.928331 | orchestrator | export IMAGE_USER=ubuntu 2025-05-17 00:15:32.928341 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-17 00:15:32.928350 | orchestrator | 2025-05-17 00:15:32.928363 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-17 00:15:32.928378 | orchestrator | 2025-05-17 00:15:32.928388 | orchestrator | + echo 2025-05-17 00:15:32.928398 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-17 00:15:32.929074 | orchestrator | ++ export INTERACTIVE=false 2025-05-17 00:15:32.929090 | orchestrator | ++ INTERACTIVE=false 2025-05-17 00:15:32.929101 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-17 00:15:32.929111 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-17 00:15:32.929294 | orchestrator | + source /opt/manager-vars.sh 2025-05-17 00:15:32.929309 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-17 00:15:32.929320 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-17 00:15:32.929329 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-17 00:15:32.929339 | orchestrator | ++ CEPH_VERSION=reef 2025-05-17 00:15:32.929438 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-17 00:15:32.929452 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-17 00:15:32.929462 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-17 00:15:32.929472 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-17 00:15:32.929482 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-17 00:15:32.929491 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-17 00:15:32.929501 | orchestrator | ++ export ARA=false 2025-05-17 00:15:32.929511 | orchestrator | ++ ARA=false 2025-05-17 00:15:32.929529 | orchestrator | ++ export TEMPEST=false 2025-05-17 00:15:32.929538 | orchestrator | ++ TEMPEST=false 2025-05-17 00:15:32.929548 | orchestrator | ++ export IS_ZUUL=true 2025-05-17 00:15:32.929557 | orchestrator | ++ IS_ZUUL=true 2025-05-17 00:15:32.929567 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2025-05-17 00:15:32.929577 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2025-05-17 00:15:32.929586 | orchestrator | ++ export EXTERNAL_API=false 2025-05-17 00:15:32.929596 | orchestrator | ++ EXTERNAL_API=false 2025-05-17 00:15:32.929605 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-17 00:15:32.929641 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-17 00:15:32.929652 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-17 00:15:32.929662 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-17 00:15:32.929672 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-17 00:15:32.929682 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-17 00:15:32.929692 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-17 00:15:32.986860 | orchestrator | + docker version 2025-05-17 00:15:33.242110 | orchestrator | Client: Docker Engine - Community 2025-05-17 00:15:33.242258 | orchestrator | Version: 26.1.4 2025-05-17 00:15:33.242277 | orchestrator | API version: 1.45 2025-05-17 00:15:33.242288 | orchestrator | Go version: go1.21.11 2025-05-17 00:15:33.242298 | orchestrator | Git commit: 5650f9b 2025-05-17 00:15:33.242308 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-17 00:15:33.242320 | orchestrator | OS/Arch: linux/amd64 2025-05-17 00:15:33.242400 | orchestrator | Context: default 2025-05-17 00:15:33.242412 | 
orchestrator | 2025-05-17 00:15:33.242423 | orchestrator | Server: Docker Engine - Community 2025-05-17 00:15:33.242433 | orchestrator | Engine: 2025-05-17 00:15:33.242443 | orchestrator | Version: 26.1.4 2025-05-17 00:15:33.242454 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-17 00:15:33.242464 | orchestrator | Go version: go1.21.11 2025-05-17 00:15:33.242474 | orchestrator | Git commit: de5c9cf 2025-05-17 00:15:33.242513 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-17 00:15:33.242523 | orchestrator | OS/Arch: linux/amd64 2025-05-17 00:15:33.242533 | orchestrator | Experimental: false 2025-05-17 00:15:33.242543 | orchestrator | containerd: 2025-05-17 00:15:33.242553 | orchestrator | Version: 1.7.27 2025-05-17 00:15:33.242563 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-17 00:15:33.242573 | orchestrator | runc: 2025-05-17 00:15:33.242582 | orchestrator | Version: 1.2.5 2025-05-17 00:15:33.242592 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-17 00:15:33.242602 | orchestrator | docker-init: 2025-05-17 00:15:33.242611 | orchestrator | Version: 0.19.0 2025-05-17 00:15:33.242652 | orchestrator | GitCommit: de40ad0 2025-05-17 00:15:33.245374 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-17 00:15:33.255146 | orchestrator | + set -e 2025-05-17 00:15:33.255192 | orchestrator | + source /opt/manager-vars.sh 2025-05-17 00:15:33.255206 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-17 00:15:33.255218 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-17 00:15:33.255229 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-17 00:15:33.255238 | orchestrator | ++ CEPH_VERSION=reef 2025-05-17 00:15:33.255249 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-17 00:15:33.255261 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-17 00:15:33.255270 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-17 00:15:33.255280 | orchestrator | ++ MANAGER_VERSION=8.1.0 
2025-05-17 00:15:33.255290 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-17 00:15:33.255300 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-17 00:15:33.255309 | orchestrator | ++ export ARA=false
2025-05-17 00:15:33.255319 | orchestrator | ++ ARA=false
2025-05-17 00:15:33.255329 | orchestrator | ++ export TEMPEST=false
2025-05-17 00:15:33.255338 | orchestrator | ++ TEMPEST=false
2025-05-17 00:15:33.255348 | orchestrator | ++ export IS_ZUUL=true
2025-05-17 00:15:33.255357 | orchestrator | ++ IS_ZUUL=true
2025-05-17 00:15:33.255367 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54
2025-05-17 00:15:33.255377 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54
2025-05-17 00:15:33.255386 | orchestrator | ++ export EXTERNAL_API=false
2025-05-17 00:15:33.255396 | orchestrator | ++ EXTERNAL_API=false
2025-05-17 00:15:33.255405 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-17 00:15:33.255415 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-17 00:15:33.255424 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-17 00:15:33.255434 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-17 00:15:33.255444 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-17 00:15:33.255453 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-17 00:15:33.255463 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-17 00:15:33.255473 | orchestrator | ++ export INTERACTIVE=false
2025-05-17 00:15:33.255482 | orchestrator | ++ INTERACTIVE=false
2025-05-17 00:15:33.255491 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-17 00:15:33.255501 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-17 00:15:33.255518 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-17 00:15:33.255528 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0
2025-05-17 00:15:33.262760 | orchestrator | + set -e
2025-05-17 00:15:33.262827 | orchestrator | + VERSION=8.1.0
2025-05-17 00:15:33.262855 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-05-17 00:15:33.270370 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-17 00:15:33.270448 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-17 00:15:33.274400 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-17 00:15:33.277986 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-05-17 00:15:33.284594 | orchestrator | /opt/configuration ~
2025-05-17 00:15:33.284665 | orchestrator | + set -e
2025-05-17 00:15:33.284681 | orchestrator | + pushd /opt/configuration
2025-05-17 00:15:33.284693 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-17 00:15:33.285836 | orchestrator | + source /opt/venv/bin/activate
2025-05-17 00:15:33.287042 | orchestrator | ++ deactivate nondestructive
2025-05-17 00:15:33.287068 | orchestrator | ++ '[' -n '' ']'
2025-05-17 00:15:33.287177 | orchestrator | ++ '[' -n '' ']'
2025-05-17 00:15:33.287193 | orchestrator | ++ hash -r
2025-05-17 00:15:33.287309 | orchestrator | ++ '[' -n '' ']'
2025-05-17 00:15:33.287324 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-17 00:15:33.287474 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-17 00:15:33.287489 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-17 00:15:33.287759 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-17 00:15:33.287802 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-17 00:15:33.287815 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-17 00:15:33.287826 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-17 00:15:33.287839 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-17 00:15:33.287856 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-17 00:15:33.287867 | orchestrator | ++ export PATH
2025-05-17 00:15:33.287878 | orchestrator | ++ '[' -n '' ']'
2025-05-17 00:15:33.288046 | orchestrator | ++ '[' -z '' ']'
2025-05-17 00:15:33.288060 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-17 00:15:33.288071 | orchestrator | ++ PS1='(venv) '
2025-05-17 00:15:33.288082 | orchestrator | ++ export PS1
2025-05-17 00:15:33.288093 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-17 00:15:33.288108 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-17 00:15:33.288119 | orchestrator | ++ hash -r
2025-05-17 00:15:33.288268 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-05-17 00:15:34.282590 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-05-17 00:15:34.283066 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-05-17 00:15:34.284590 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-05-17 00:15:34.285969 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-05-17 00:15:34.287032 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-05-17 00:15:34.297220 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.0)
2025-05-17 00:15:34.298681 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-05-17 00:15:34.299792 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-05-17 00:15:34.301240 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-05-17 00:15:34.332954 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-05-17 00:15:34.334281 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-05-17 00:15:34.336041 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-05-17 00:15:34.337494 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-05-17 00:15:34.341655 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-05-17 00:15:34.546304 | orchestrator | ++ which gilt
2025-05-17 00:15:34.549721 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-05-17 00:15:34.549747 | orchestrator | + /opt/venv/bin/gilt overlay
2025-05-17 00:15:34.769937 | orchestrator | osism.cfg-generics:
2025-05-17 00:15:34.770161 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics
2025-05-17 00:15:36.244994 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-05-17 00:15:36.245141 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-05-17 00:15:36.245311 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-05-17 00:15:36.245334 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-05-17 00:15:37.221215 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-05-17 00:15:37.231135 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-05-17 00:15:37.718720 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-05-17 00:15:37.766155 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-17 00:15:37.766255 | orchestrator | + deactivate
2025-05-17 00:15:37.766269 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-17 00:15:37.766282 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-17 00:15:37.766292 | orchestrator | + export PATH
2025-05-17 00:15:37.766302 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-17 00:15:37.766312 | orchestrator | + '[' -n '' ']'
2025-05-17 00:15:37.766321 | orchestrator | + hash -r
2025-05-17 00:15:37.766331 | orchestrator | + '[' -n '' ']'
2025-05-17 00:15:37.766340 | orchestrator | + unset VIRTUAL_ENV
2025-05-17 00:15:37.766350 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-17 00:15:37.766371 | orchestrator | ~
2025-05-17 00:15:37.766382 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-17 00:15:37.766392 | orchestrator | + unset -f deactivate
2025-05-17 00:15:37.766401 | orchestrator | + popd
2025-05-17 00:15:37.768079 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-05-17 00:15:37.768095 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-05-17 00:15:37.769261 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-17 00:15:37.817330 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-17 00:15:37.817369 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-05-17 00:15:37.817386 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-05-17 00:15:37.850798 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-17 00:15:37.850841 | orchestrator | + source /opt/venv/bin/activate
2025-05-17 00:15:37.850855 | orchestrator | ++ deactivate nondestructive
2025-05-17 00:15:37.850866 | orchestrator | ++ '[' -n '' ']'
2025-05-17 00:15:37.850877 | orchestrator | ++ '[' -n '' ']'
2025-05-17 00:15:37.850887 | orchestrator | ++ hash -r
2025-05-17 00:15:37.850897 | orchestrator | ++ '[' -n '' ']'
2025-05-17 00:15:37.850907 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-17 00:15:37.850916 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-17 00:15:37.850926 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-17 00:15:37.850936 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-17 00:15:37.850946 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-17 00:15:37.850956 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-17 00:15:37.850966 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-17 00:15:37.850983 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-17 00:15:37.850994 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-17 00:15:37.851004 | orchestrator | ++ export PATH
2025-05-17 00:15:37.851014 | orchestrator | ++ '[' -n '' ']'
2025-05-17 00:15:37.851023 | orchestrator | ++ '[' -z '' ']'
2025-05-17 00:15:37.851033 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-17 00:15:37.851043 | orchestrator | ++ PS1='(venv) '
2025-05-17 00:15:37.851053 | orchestrator | ++ export PS1
2025-05-17 00:15:37.851062 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-17 00:15:37.851072 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-17 00:15:37.851081 | orchestrator | ++ hash -r
2025-05-17 00:15:37.851091 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-05-17 00:15:39.039840 | orchestrator |
2025-05-17 00:15:39.039989 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-05-17 00:15:39.040008 | orchestrator |
2025-05-17 00:15:39.040021 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-17 00:15:39.593489 | orchestrator | ok: [testbed-manager]
2025-05-17 00:15:39.593589 | orchestrator |
2025-05-17 00:15:39.593605 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-17 00:15:40.557916 | orchestrator | changed: [testbed-manager]
2025-05-17 00:15:40.558085 | orchestrator |
2025-05-17 00:15:40.558105 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-05-17 00:15:40.558118 | orchestrator |
2025-05-17 00:15:40.558130 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-17 00:15:42.914671 | orchestrator | ok: [testbed-manager]
2025-05-17 00:15:42.914783 | orchestrator |
2025-05-17 00:15:42.914798 | orchestrator | TASK [Pull images] *************************************************************
2025-05-17 00:15:47.899548 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-05-17 00:15:47.899746 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2)
2025-05-17 00:15:47.899767 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0)
2025-05-17 00:15:47.899779 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0)
2025-05-17 00:15:47.899790 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0)
2025-05-17 00:15:47.899805 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine)
2025-05-17 00:15:47.899818 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7)
2025-05-17 00:15:47.899832 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0)
2025-05-17 00:15:47.899843 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2)
2025-05-17 00:15:47.899854 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine)
2025-05-17 00:15:47.899865 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1)
2025-05-17 00:15:47.899876 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2)
2025-05-17 00:15:47.899887 | orchestrator |
2025-05-17 00:15:47.899898 | orchestrator | TASK [Check status] ************************************************************
2025-05-17 00:17:04.025914 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-17 00:17:04.026129 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-17 00:17:04.026148 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-05-17 00:17:04.026160 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-05-17 00:17:04.026187 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j844565467422.1590', 'results_file': '/home/dragon/.ansible_async/j844565467422.1590', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026208 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j784535840915.1615', 'results_file': '/home/dragon/.ansible_async/j784535840915.1615', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026225 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
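The Pull images task above starts each image pull as an asynchronous job, and Check status then polls the per-job results files (under ~/.ansible_async/) with up to 120 retries each. A rough shell analogue of that start-then-poll pattern, using `sleep` jobs and temp files as stand-ins for the actual pulls and Ansible's job machinery:

```shell
#!/usr/bin/env bash
set -e
workdir="$(mktemp -d)"
# Start stand-in "pull" jobs in the background; each writes a results
# file when done, much like an Ansible async job writes its status to
# ~/.ansible_async/<job_id>.
for image in ara-server mariadb netbox; do
  ( sleep 1; echo finished > "$workdir/$image" ) &
done
# Poll the results files with a retry budget per job (the playbook's
# Check status task allows 120 retries with a delay between attempts).
pulled_count=0
for image in ara-server mariadb netbox; do
  retries=120
  until [[ -f "$workdir/$image" ]]; do
    retries=$((retries - 1))
    if [[ $retries -le 0 ]]; then
      echo "timed out waiting for $image" >&2
      exit 1
    fi
    sleep 0.1
  done
  pulled_count=$((pulled_count + 1))
done
wait
rm -rf "$workdir"
echo "all $pulled_count pulls finished"
```

This explains the interleaved output above: the FAILED - RETRYING lines are normal polling noise while a job is still running, and each `changed:` item reports the finished async job for one image.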
2025-05-17 00:17:04.026237 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j432672472416.1640', 'results_file': '/home/dragon/.ansible_async/j432672472416.1640', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026249 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j197980019743.1672', 'results_file': '/home/dragon/.ansible_async/j197980019743.1672', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026260 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-17 00:17:04.026283 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j371354252577.1704', 'results_file': '/home/dragon/.ansible_async/j371354252577.1704', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026296 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j448265925395.1736', 'results_file': '/home/dragon/.ansible_async/j448265925395.1736', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026308 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-17 00:17:04.026361 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j59837672738.1768', 'results_file': '/home/dragon/.ansible_async/j59837672738.1768', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026374 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j308130718402.1807', 'results_file': '/home/dragon/.ansible_async/j308130718402.1807', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026386 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j294519908128.1840', 'results_file': '/home/dragon/.ansible_async/j294519908128.1840', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026398 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j412722656187.1866', 'results_file': '/home/dragon/.ansible_async/j412722656187.1866', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026410 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j8415799265.1898', 'results_file': '/home/dragon/.ansible_async/j8415799265.1898', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026424 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j536752622596.1930', 'results_file': '/home/dragon/.ansible_async/j536752622596.1930', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'})
2025-05-17 00:17:04.026439 | orchestrator |
2025-05-17 00:17:04.026453 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-05-17 00:17:04.074666 | orchestrator | ok: [testbed-manager]
2025-05-17 00:17:04.074781 | orchestrator |
2025-05-17 00:17:04.074800 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-05-17 00:17:04.551703 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:04.551801 | orchestrator |
2025-05-17 00:17:04.551816 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-05-17 00:17:04.896272 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:04.896374 | orchestrator |
2025-05-17 00:17:04.896389 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-17 00:17:05.251415 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:05.251509 | orchestrator |
2025-05-17 00:17:05.251525 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-05-17 00:17:05.322257 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:17:05.322333 | orchestrator |
2025-05-17 00:17:05.322341 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-05-17 00:17:05.671185 | orchestrator | ok: [testbed-manager]
2025-05-17 00:17:05.671285 | orchestrator |
2025-05-17 00:17:05.671301 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-05-17 00:17:05.790937 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:17:05.791030 | orchestrator |
2025-05-17 00:17:05.791042 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-05-17 00:17:05.791052 | orchestrator |
2025-05-17 00:17:05.791060 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-17 00:17:07.642626 | orchestrator | ok: [testbed-manager]
2025-05-17 00:17:07.642810 | orchestrator |
2025-05-17 00:17:07.642830 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-05-17 00:17:07.737143 | orchestrator | included: osism.services.traefik for testbed-manager
2025-05-17 00:17:07.737255 | orchestrator |
2025-05-17 00:17:07.737270 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-05-17 00:17:07.785414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-05-17 00:17:07.785556 | orchestrator |
2025-05-17 00:17:07.785586 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-05-17 00:17:08.878949 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-05-17 00:17:08.879051 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-05-17 00:17:08.879065 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-05-17 00:17:08.879076 | orchestrator |
2025-05-17 00:17:08.879088 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-05-17 00:17:10.692603 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-05-17 00:17:10.692769 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-05-17 00:17:10.692785 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-05-17 00:17:10.692798 | orchestrator |
2025-05-17 00:17:10.692811 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-05-17 00:17:11.357054 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-17 00:17:11.357160 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:11.357177 | orchestrator |
2025-05-17 00:17:11.357211 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-05-17 00:17:11.987494 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-17 00:17:11.987596 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:11.987611 | orchestrator |
2025-05-17 00:17:11.987623 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-05-17 00:17:12.031171 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:17:12.031237 | orchestrator |
2025-05-17 00:17:12.031251 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-05-17 00:17:12.374089 | orchestrator | ok: [testbed-manager]
2025-05-17 00:17:12.374188 | orchestrator |
2025-05-17 00:17:12.374203 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-05-17 00:17:12.435498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-05-17 00:17:12.435562 | orchestrator |
2025-05-17 00:17:12.435575 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-05-17 00:17:13.434976 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:13.435104 | orchestrator |
2025-05-17 00:17:13.435121 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-05-17 00:17:14.234165 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:14.234267 | orchestrator |
2025-05-17 00:17:14.234283 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-05-17 00:17:17.451615 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:17.451779 | orchestrator |
2025-05-17 00:17:17.451796 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-05-17 00:17:17.581734 | orchestrator | included: osism.services.netbox for testbed-manager
2025-05-17 00:17:17.581824 | orchestrator |
2025-05-17 00:17:17.581837 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-05-17 00:17:17.654799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-05-17 00:17:17.654874 | orchestrator |
2025-05-17 00:17:17.654888 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-05-17 00:17:20.164985 | orchestrator | ok: [testbed-manager]
2025-05-17 00:17:20.165097 | orchestrator |
2025-05-17 00:17:20.165114 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-17 00:17:20.261967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-05-17 00:17:20.262115 | orchestrator |
2025-05-17 00:17:20.262131 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-05-17 00:17:21.375188 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-05-17 00:17:21.375298 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-05-17 00:17:21.375320 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-05-17 00:17:21.375379 | orchestrator |
2025-05-17 00:17:21.375399 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-05-17 00:17:21.439689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-05-17 00:17:21.439757 | orchestrator |
2025-05-17 00:17:21.439770 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-05-17 00:17:22.080922 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-05-17 00:17:22.081010 | orchestrator |
2025-05-17 00:17:22.081022 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] ****************
2025-05-17 00:17:22.709388 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:22.709499 | orchestrator |
2025-05-17 00:17:22.709516 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-17 00:17:23.334331 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-17 00:17:23.334437 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:23.334452 | orchestrator |
2025-05-17 00:17:23.334464 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-05-17 00:17:23.755760 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:23.755861 | orchestrator |
2025-05-17 00:17:23.755876 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-05-17 00:17:24.134803 | orchestrator | ok: [testbed-manager]
2025-05-17 00:17:24.134889 | orchestrator |
2025-05-17 00:17:24.134900 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-05-17 00:17:24.187298 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:17:24.187372 | orchestrator |
2025-05-17 00:17:24.187386 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-05-17 00:17:24.813802 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:24.813947 | orchestrator |
2025-05-17 00:17:24.813977 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-17 00:17:24.891666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-05-17 00:17:24.891761 | orchestrator |
2025-05-17 00:17:24.891775 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-05-17 00:17:25.654999 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-05-17 00:17:25.655128 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-05-17 00:17:25.655145 | orchestrator |
2025-05-17 00:17:25.655158 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-05-17 00:17:26.312187 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-05-17 00:17:26.312292 | orchestrator |
2025-05-17 00:17:26.312308 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-05-17 00:17:26.961287 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:26.961393 | orchestrator |
2025-05-17 00:17:26.961411 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-05-17 00:17:27.012082 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:17:27.012173 | orchestrator |
2025-05-17 00:17:27.012187 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-05-17 00:17:27.661953 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:27.662108 | orchestrator |
2025-05-17 00:17:27.662125 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-17 00:17:29.414621 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-17 00:17:29.414782 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-17 00:17:29.414799 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-17 00:17:29.414812 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:29.414824 | orchestrator |
2025-05-17 00:17:29.414836 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-05-17 00:17:35.276140 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-05-17 00:17:35.276258 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-05-17 00:17:35.276274 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-05-17 00:17:35.276286 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-05-17 00:17:35.276321 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-05-17 00:17:35.276332 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-05-17 00:17:35.276343 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-05-17 00:17:35.276374 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-05-17 00:17:35.276387 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-05-17 00:17:35.276398 | orchestrator | changed: [testbed-manager] => (item=users)
2025-05-17 00:17:35.276409 | orchestrator |
2025-05-17 00:17:35.276421 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-05-17 00:17:35.939532 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-05-17 00:17:35.939702 | orchestrator |
2025-05-17 00:17:35.939723 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-05-17 00:17:36.031251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-05-17 00:17:36.031344 | orchestrator |
2025-05-17 00:17:36.031357 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-05-17 00:17:36.802881 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:36.802983 | orchestrator |
2025-05-17 00:17:36.802998 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-05-17 00:17:37.419034 | orchestrator | ok: [testbed-manager]
2025-05-17 00:17:37.419155 | orchestrator |
2025-05-17 00:17:37.419171 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-05-17 00:17:38.151126 | orchestrator | changed: [testbed-manager]
2025-05-17 00:17:38.151231 | orchestrator |
2025-05-17 00:17:38.151248 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-05-17 00:17:40.540974 | orchestrator | ok: [testbed-manager]
2025-05-17 00:17:40.541082 | orchestrator |
2025-05-17 00:17:40.541099 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-05-17 00:17:42.481934 | orchestrator | ok: [testbed-manager]
2025-05-17 00:17:42.482157 | orchestrator |
2025-05-17 00:17:42.482178 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-05-17 00:18:04.652832 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-05-17 00:18:04.652952 | orchestrator | ok: [testbed-manager] 2025-05-17 00:18:04.652969 | orchestrator | 2025-05-17 00:18:04.652981 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-17 00:18:04.712487 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:18:04.712575 | orchestrator | 2025-05-17 00:18:04.712589 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-17 00:18:04.712601 | orchestrator | 2025-05-17 00:18:04.712612 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-17 00:18:04.753891 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:18:04.753967 | orchestrator | 2025-05-17 00:18:04.753980 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-17 00:18:04.824976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-17 00:18:04.825078 | orchestrator | 2025-05-17 00:18:04.825093 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-17 00:18:05.656590 | orchestrator | ok: [testbed-manager] 2025-05-17 00:18:05.656757 | orchestrator | 2025-05-17 00:18:05.656776 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-17 00:18:05.734451 | orchestrator | ok: [testbed-manager] 2025-05-17 00:18:05.734547 | orchestrator | 2025-05-17 00:18:05.734563 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-17 00:18:05.795720 | orchestrator | ok: [testbed-manager] => { 2025-05-17 00:18:05.795798 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-17 00:18:05.795812 | orchestrator | } 2025-05-17 00:18:05.795823 | orchestrator | 2025-05-17 
00:18:05.795835 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-17 00:18:06.441894 | orchestrator | ok: [testbed-manager] 2025-05-17 00:18:06.442097 | orchestrator | 2025-05-17 00:18:06.442117 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-17 00:18:07.285330 | orchestrator | ok: [testbed-manager] 2025-05-17 00:18:07.359795 | orchestrator | 2025-05-17 00:18:07.359885 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-17 00:18:07.359924 | orchestrator | ok: [testbed-manager] 2025-05-17 00:18:07.359936 | orchestrator | 2025-05-17 00:18:07.359948 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-17 00:18:07.401141 | orchestrator | ok: [testbed-manager] => { 2025-05-17 00:18:07.401237 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-17 00:18:07.401253 | orchestrator | } 2025-05-17 00:18:07.401265 | orchestrator | 2025-05-17 00:18:07.401276 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-17 00:18:07.458558 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:18:07.458635 | orchestrator | 2025-05-17 00:18:07.458649 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-17 00:18:07.519859 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:18:07.519936 | orchestrator | 2025-05-17 00:18:07.519949 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-17 00:18:07.579508 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:18:07.579573 | orchestrator | 2025-05-17 00:18:07.579586 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-17 00:18:07.630395 | orchestrator | skipping: 
[testbed-manager] 2025-05-17 00:18:07.630450 | orchestrator | 2025-05-17 00:18:07.630464 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-17 00:18:07.684177 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:18:07.684223 | orchestrator | 2025-05-17 00:18:07.684236 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-17 00:18:07.784949 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:18:07.785031 | orchestrator | 2025-05-17 00:18:07.785050 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-17 00:18:09.192232 | orchestrator | changed: [testbed-manager] 2025-05-17 00:18:09.192334 | orchestrator | 2025-05-17 00:18:09.192350 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-17 00:18:09.262298 | orchestrator | ok: [testbed-manager] 2025-05-17 00:18:09.262355 | orchestrator | 2025-05-17 00:18:09.262368 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-17 00:19:09.319403 | orchestrator | Pausing for 60 seconds 2025-05-17 00:19:09.319525 | orchestrator | changed: [testbed-manager] 2025-05-17 00:19:09.319541 | orchestrator | 2025-05-17 00:19:09.319555 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-17 00:19:09.388673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-17 00:19:09.388798 | orchestrator | 2025-05-17 00:19:09.388812 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-17 00:23:20.914177 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 
2025-05-17 00:23:20.914284 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-17 00:23:20.914298 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-05-17 00:23:20.914308 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-17 00:23:20.914317 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-17 00:23:20.914327 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-17 00:23:20.914338 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-17 00:23:20.914345 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-17 00:23:20.914351 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-17 00:23:20.914378 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-17 00:23:20.914384 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-17 00:23:20.914390 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-17 00:23:20.914396 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-17 00:23:20.914402 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 
2025-05-17 00:23:20.914407 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-17 00:23:20.914415 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-17 00:23:20.914421 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-17 00:23:20.914427 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-17 00:23:20.914437 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-17 00:23:20.914447 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-17 00:23:20.914456 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-05-17 00:23:20.914469 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-05-17 00:23:20.914481 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-05-17 00:23:20.914491 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 
2025-05-17 00:23:20.914501 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:20.914512 | orchestrator | 2025-05-17 00:23:20.914522 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-17 00:23:20.914531 | orchestrator | 2025-05-17 00:23:20.914538 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-17 00:23:22.856263 | orchestrator | ok: [testbed-manager] 2025-05-17 00:23:22.856366 | orchestrator | 2025-05-17 00:23:22.856381 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-17 00:23:22.960526 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-17 00:23:22.960609 | orchestrator | 2025-05-17 00:23:22.960622 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-17 00:23:23.030836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-17 00:23:23.030916 | orchestrator | 2025-05-17 00:23:23.030929 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-17 00:23:24.582838 | orchestrator | ok: [testbed-manager] 2025-05-17 00:23:24.582971 | orchestrator | 2025-05-17 00:23:24.582996 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-17 00:23:24.640680 | orchestrator | ok: [testbed-manager] 2025-05-17 00:23:24.640793 | orchestrator | 2025-05-17 00:23:24.640810 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-17 00:23:24.719379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-17 00:23:24.719479 | orchestrator | 2025-05-17 00:23:24.719495 | orchestrator | TASK 
[osism.services.manager : Create required directories] ******************** 2025-05-17 00:23:27.521588 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-17 00:23:27.521700 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-17 00:23:27.521715 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-17 00:23:27.521727 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-17 00:23:27.521811 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-17 00:23:27.521825 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-17 00:23:27.521837 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-17 00:23:27.521847 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-17 00:23:27.521859 | orchestrator | 2025-05-17 00:23:27.521871 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-17 00:23:28.187222 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:28.187324 | orchestrator | 2025-05-17 00:23:28.187338 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-17 00:23:28.272429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-17 00:23:28.272519 | orchestrator | 2025-05-17 00:23:28.272532 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-05-17 00:23:29.494817 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-17 00:23:29.494931 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-17 00:23:29.494947 | orchestrator | 2025-05-17 00:23:29.494960 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-17 00:23:30.131902 | orchestrator | 
changed: [testbed-manager] 2025-05-17 00:23:30.131986 | orchestrator | 2025-05-17 00:23:30.131996 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-17 00:23:30.201011 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:23:30.201095 | orchestrator | 2025-05-17 00:23:30.201112 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-17 00:23:30.274965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-17 00:23:30.275050 | orchestrator | 2025-05-17 00:23:30.275090 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-17 00:23:31.634318 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-17 00:23:31.634433 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-17 00:23:31.634449 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:31.634462 | orchestrator | 2025-05-17 00:23:31.634474 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-17 00:23:32.282881 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:32.282983 | orchestrator | 2025-05-17 00:23:32.282999 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-17 00:23:32.377365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-17 00:23:32.377439 | orchestrator | 2025-05-17 00:23:32.377453 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-17 00:23:33.535463 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-17 00:23:33.535568 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-17 00:23:33.535583 | orchestrator | 
changed: [testbed-manager] 2025-05-17 00:23:33.535596 | orchestrator | 2025-05-17 00:23:33.535608 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-17 00:23:34.183687 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:34.183825 | orchestrator | 2025-05-17 00:23:34.183843 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-17 00:23:34.308488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-17 00:23:34.308588 | orchestrator | 2025-05-17 00:23:34.308607 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-17 00:23:34.914265 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:34.914353 | orchestrator | 2025-05-17 00:23:34.914369 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-17 00:23:35.326556 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:35.326651 | orchestrator | 2025-05-17 00:23:35.326667 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-17 00:23:36.579276 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-17 00:23:36.579386 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-17 00:23:36.579402 | orchestrator | 2025-05-17 00:23:36.579415 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-17 00:23:37.328320 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:37.328412 | orchestrator | 2025-05-17 00:23:37.328428 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-17 00:23:37.750364 | orchestrator | ok: [testbed-manager] 2025-05-17 00:23:37.750444 | orchestrator | 2025-05-17 00:23:37.750458 | 
orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-17 00:23:38.115128 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:38.115216 | orchestrator | 2025-05-17 00:23:38.115231 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-17 00:23:38.155225 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:23:38.155285 | orchestrator | 2025-05-17 00:23:38.155297 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-17 00:23:38.238591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-17 00:23:38.238649 | orchestrator | 2025-05-17 00:23:38.238663 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-17 00:23:38.283129 | orchestrator | ok: [testbed-manager] 2025-05-17 00:23:38.283194 | orchestrator | 2025-05-17 00:23:38.283206 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-17 00:23:40.314658 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-17 00:23:40.314803 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-17 00:23:40.314820 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-17 00:23:40.314832 | orchestrator | 2025-05-17 00:23:40.314844 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-17 00:23:41.026329 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:41.026435 | orchestrator | 2025-05-17 00:23:41.026458 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-17 00:23:41.761810 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:41.761917 | orchestrator | 2025-05-17 
00:23:41.761932 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-17 00:23:42.492672 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:42.493526 | orchestrator | 2025-05-17 00:23:42.493559 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-17 00:23:42.567519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-17 00:23:42.567603 | orchestrator | 2025-05-17 00:23:42.567616 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-17 00:23:42.618363 | orchestrator | ok: [testbed-manager] 2025-05-17 00:23:42.618442 | orchestrator | 2025-05-17 00:23:42.618455 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-17 00:23:43.342953 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-17 00:23:43.343048 | orchestrator | 2025-05-17 00:23:43.343062 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-17 00:23:43.440256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-17 00:23:43.440341 | orchestrator | 2025-05-17 00:23:43.440354 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-17 00:23:44.165335 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:44.165457 | orchestrator | 2025-05-17 00:23:44.165482 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-17 00:23:44.775255 | orchestrator | ok: [testbed-manager] 2025-05-17 00:23:44.775360 | orchestrator | 2025-05-17 00:23:44.775376 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for 
mariadb < 11.0.0] *** 2025-05-17 00:23:44.820975 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:23:44.821056 | orchestrator | 2025-05-17 00:23:44.821089 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-17 00:23:44.861146 | orchestrator | ok: [testbed-manager] 2025-05-17 00:23:44.861228 | orchestrator | 2025-05-17 00:23:44.861249 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-17 00:23:45.686830 | orchestrator | changed: [testbed-manager] 2025-05-17 00:23:45.686902 | orchestrator | 2025-05-17 00:23:45.686908 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-17 00:24:26.329586 | orchestrator | changed: [testbed-manager] 2025-05-17 00:24:26.329705 | orchestrator | 2025-05-17 00:24:26.329722 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-17 00:24:27.008080 | orchestrator | ok: [testbed-manager] 2025-05-17 00:24:27.008177 | orchestrator | 2025-05-17 00:24:27.008191 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-17 00:24:29.791926 | orchestrator | changed: [testbed-manager] 2025-05-17 00:24:29.792033 | orchestrator | 2025-05-17 00:24:29.792049 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-17 00:24:29.860784 | orchestrator | ok: [testbed-manager] 2025-05-17 00:24:29.860858 | orchestrator | 2025-05-17 00:24:29.860872 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-17 00:24:29.860884 | orchestrator | 2025-05-17 00:24:29.860895 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-17 00:24:29.903985 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:24:29.904013 | orchestrator | 2025-05-17 
00:24:29.904025 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-17 00:25:29.959375 | orchestrator | Pausing for 60 seconds 2025-05-17 00:25:29.959496 | orchestrator | changed: [testbed-manager] 2025-05-17 00:25:29.959512 | orchestrator | 2025-05-17 00:25:29.959525 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-17 00:25:35.526137 | orchestrator | changed: [testbed-manager] 2025-05-17 00:25:35.527083 | orchestrator | 2025-05-17 00:25:35.527119 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-17 00:26:17.173440 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-17 00:26:17.173564 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-17 00:26:17.173579 | orchestrator | changed: [testbed-manager] 2025-05-17 00:26:17.173593 | orchestrator | 2025-05-17 00:26:17.173605 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-17 00:26:22.870415 | orchestrator | changed: [testbed-manager] 2025-05-17 00:26:22.870538 | orchestrator | 2025-05-17 00:26:22.870555 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-17 00:26:22.969796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-17 00:26:22.969889 | orchestrator | 2025-05-17 00:26:22.969902 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-17 00:26:22.969914 | orchestrator | 2025-05-17 00:26:22.969926 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-17 00:26:23.027254 | orchestrator | skipping: 
[testbed-manager] 2025-05-17 00:26:23.027318 | orchestrator | 2025-05-17 00:26:23.027331 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:26:23.027345 | orchestrator | testbed-manager : ok=109 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-17 00:26:23.027356 | orchestrator | 2025-05-17 00:26:23.157316 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-17 00:26:23.157397 | orchestrator | + deactivate 2025-05-17 00:26:23.157410 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-17 00:26:23.157423 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-17 00:26:23.157434 | orchestrator | + export PATH 2025-05-17 00:26:23.157445 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-17 00:26:23.157457 | orchestrator | + '[' -n '' ']' 2025-05-17 00:26:23.157491 | orchestrator | + hash -r 2025-05-17 00:26:23.157502 | orchestrator | + '[' -n '' ']' 2025-05-17 00:26:23.157513 | orchestrator | + unset VIRTUAL_ENV 2025-05-17 00:26:23.157524 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-17 00:26:23.157566 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-17 00:26:23.157578 | orchestrator | + unset -f deactivate 2025-05-17 00:26:23.157590 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-17 00:26:23.169464 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-17 00:26:23.169512 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-17 00:26:23.169524 | orchestrator | + local max_attempts=60 2025-05-17 00:26:23.169535 | orchestrator | + local name=ceph-ansible 2025-05-17 00:26:23.169546 | orchestrator | + local attempt_num=1 2025-05-17 00:26:23.170494 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-17 00:26:23.203672 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-17 00:26:23.203707 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-17 00:26:23.203718 | orchestrator | + local max_attempts=60 2025-05-17 00:26:23.203729 | orchestrator | + local name=kolla-ansible 2025-05-17 00:26:23.203741 | orchestrator | + local attempt_num=1 2025-05-17 00:26:23.204788 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-17 00:26:23.235932 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-17 00:26:23.235977 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-17 00:26:23.235989 | orchestrator | + local max_attempts=60 2025-05-17 00:26:23.236000 | orchestrator | + local name=osism-ansible 2025-05-17 00:26:23.236011 | orchestrator | + local attempt_num=1 2025-05-17 00:26:23.236557 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-17 00:26:23.271754 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-17 00:26:23.271837 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-17 00:26:23.271849 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-17 00:26:23.955046 | orchestrator | ++ semver 8.1.0 9.0.0 
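The `set -x` trace above shows `wait_for_container_healthy` polling the Docker health status of each ansible container. Only the first iteration is visible in the log (all three containers were already healthy), so the sleep interval and the failure path below are assumptions; this is a minimal sketch of what the traced function appears to do, not the actual OSISM script:

```shell
#!/bin/sh
# Sketch of the wait_for_container_healthy helper seen in the trace.
# The trace shows: two positional args (max_attempts, name), an
# attempt counter starting at 1, and a `docker inspect` of the
# container's .State.Health.Status compared against "healthy".
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    # Poll until the container reports healthy or attempts run out.
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed poll interval; not visible in the log
    done
}
```

In the log the first `docker inspect` already returns `healthy` for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`, so the loop body is never entered.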
2025-05-17 00:26:24.007967 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-17 00:26:24.008054 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-17 00:26:24.008069 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-17 00:26:24.208596 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-17 00:26:24.208697 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.208712 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.208746 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-17 00:26:24.208806 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-17 00:26:24.208824 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.208836 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.209682 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.209705 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy) 2025-05-17 00:26:24.209717 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" 
listener About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.209753 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-17 00:26:24.209793 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.209805 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.209816 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-17 00:26:24.209826 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.209837 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.209848 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.209859 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-17 00:26:24.215054 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-17 00:26:24.382172 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-17 00:26:24.382269 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-17 00:26:24.382283 | orchestrator | 
netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-17 00:26:24.382295 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-05-17 00:26:24.382308 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-05-17 00:26:24.389428 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-17 00:26:24.440015 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-17 00:26:24.440100 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-17 00:26:24.444640 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-17 00:26:26.004291 | orchestrator | 2025-05-17 00:26:26 | INFO  | Task 6fc1391b-5d85-4684-9490-5704ba8b0c21 (resolvconf) was prepared for execution. 2025-05-17 00:26:26.004392 | orchestrator | 2025-05-17 00:26:26 | INFO  | It takes a moment until task 6fc1391b-5d85-4684-9490-5704ba8b0c21 (resolvconf) has been started and output is visible here. 
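The `semver 8.1.0 7.0.0` call above returns `1` (left version is newer), which the script then gates on with `[[ 1 -ge 0 ]]`. The helper's actual implementation is not shown in this log; a minimal sketch with the same -1/0/1 contract, assuming GNU `sort -V` version ordering, could look like:

```shell
# Hedged sketch of a semver comparison helper (the real `semver` used
# above is not shown in this log; this is an assumed equivalent).
# Prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
    # $2 sorts first, so $1 is the newer version
    echo 1
  else
    echo -1
  fi
}

semver_cmp 8.1.0 7.0.0
```

With this contract, `semver_cmp 8.1.0 7.0.0` yields `1`, matching the `+ [[ 1 -ge 0 ]]` trace in the log.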
2025-05-17 00:26:28.978936 | orchestrator | 2025-05-17 00:26:28.979347 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-17 00:26:28.979719 | orchestrator | 2025-05-17 00:26:28.981695 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-17 00:26:28.981723 | orchestrator | Saturday 17 May 2025 00:26:28 +0000 (0:00:00.084) 0:00:00.084 ********** 2025-05-17 00:26:32.998517 | orchestrator | ok: [testbed-manager] 2025-05-17 00:26:32.998757 | orchestrator | 2025-05-17 00:26:33.000401 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-17 00:26:33.001495 | orchestrator | Saturday 17 May 2025 00:26:32 +0000 (0:00:04.022) 0:00:04.106 ********** 2025-05-17 00:26:33.055106 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:26:33.056318 | orchestrator | 2025-05-17 00:26:33.057016 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-17 00:26:33.057625 | orchestrator | Saturday 17 May 2025 00:26:33 +0000 (0:00:00.055) 0:00:04.162 ********** 2025-05-17 00:26:33.137134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-17 00:26:33.137223 | orchestrator | 2025-05-17 00:26:33.137297 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-17 00:26:33.137708 | orchestrator | Saturday 17 May 2025 00:26:33 +0000 (0:00:00.083) 0:00:04.246 ********** 2025-05-17 00:26:33.209127 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-17 00:26:33.209631 | orchestrator | 2025-05-17 00:26:33.209955 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-05-17 00:26:33.210658 | orchestrator | Saturday 17 May 2025 00:26:33 +0000 (0:00:00.071) 0:00:04.317 ********** 2025-05-17 00:26:34.265673 | orchestrator | ok: [testbed-manager] 2025-05-17 00:26:34.266186 | orchestrator | 2025-05-17 00:26:34.267536 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-17 00:26:34.267922 | orchestrator | Saturday 17 May 2025 00:26:34 +0000 (0:00:01.055) 0:00:05.372 ********** 2025-05-17 00:26:34.322477 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:26:34.322858 | orchestrator | 2025-05-17 00:26:34.323205 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-17 00:26:34.323622 | orchestrator | Saturday 17 May 2025 00:26:34 +0000 (0:00:00.057) 0:00:05.430 ********** 2025-05-17 00:26:34.822119 | orchestrator | ok: [testbed-manager] 2025-05-17 00:26:34.823004 | orchestrator | 2025-05-17 00:26:34.823536 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-17 00:26:34.824255 | orchestrator | Saturday 17 May 2025 00:26:34 +0000 (0:00:00.500) 0:00:05.930 ********** 2025-05-17 00:26:34.921560 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:26:34.922758 | orchestrator | 2025-05-17 00:26:34.923499 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-17 00:26:34.923730 | orchestrator | Saturday 17 May 2025 00:26:34 +0000 (0:00:00.097) 0:00:06.028 ********** 2025-05-17 00:26:35.498093 | orchestrator | changed: [testbed-manager] 2025-05-17 00:26:35.498195 | orchestrator | 2025-05-17 00:26:35.498531 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-17 00:26:35.499287 | orchestrator | Saturday 17 May 2025 00:26:35 +0000 (0:00:00.574) 0:00:06.603 ********** 2025-05-17 00:26:36.598102 | orchestrator | changed: 
[testbed-manager] 2025-05-17 00:26:36.598357 | orchestrator | 2025-05-17 00:26:36.599166 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-17 00:26:36.599612 | orchestrator | Saturday 17 May 2025 00:26:36 +0000 (0:00:01.101) 0:00:07.705 ********** 2025-05-17 00:26:37.557551 | orchestrator | ok: [testbed-manager] 2025-05-17 00:26:37.557702 | orchestrator | 2025-05-17 00:26:37.558127 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-17 00:26:37.558517 | orchestrator | Saturday 17 May 2025 00:26:37 +0000 (0:00:00.958) 0:00:08.663 ********** 2025-05-17 00:26:37.643815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-17 00:26:37.644011 | orchestrator | 2025-05-17 00:26:37.644754 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-17 00:26:37.646253 | orchestrator | Saturday 17 May 2025 00:26:37 +0000 (0:00:00.088) 0:00:08.751 ********** 2025-05-17 00:26:38.801984 | orchestrator | changed: [testbed-manager] 2025-05-17 00:26:38.802546 | orchestrator | 2025-05-17 00:26:38.803399 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:26:38.803964 | orchestrator | 2025-05-17 00:26:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-17 00:26:38.804447 | orchestrator | 2025-05-17 00:26:38 | INFO  | Please wait and do not abort execution. 
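The resolvconf tasks above boil down to pointing `/etc/resolv.conf` at the systemd-resolved stub file and restarting the service. A runnable sketch of the link step, demonstrated against a temp directory so it works without root (the real target paths from the task names are noted in comments):

```shell
# Hedged sketch of the "Link /run/systemd/resolve/stub-resolv.conf to
# /etc/resolv.conf" task above, using a temp dir instead of the real paths.
tmp=$(mktemp -d)
touch "$tmp/stub-resolv.conf"     # stands in for /run/systemd/resolve/stub-resolv.conf
ln -sf "$tmp/stub-resolv.conf" "$tmp/resolv.conf"   # role does this for /etc/resolv.conf

readlink "$tmp/resolv.conf"
# On the real host this is followed by:
#   systemctl enable --now systemd-resolved && systemctl restart systemd-resolved
```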
2025-05-17 00:26:38.805295 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-17 00:26:38.807587 | orchestrator | 2025-05-17 00:26:38.808625 | orchestrator | Saturday 17 May 2025 00:26:38 +0000 (0:00:01.155) 0:00:09.907 ********** 2025-05-17 00:26:38.809214 | orchestrator | =============================================================================== 2025-05-17 00:26:38.809715 | orchestrator | Gathering Facts --------------------------------------------------------- 4.02s 2025-05-17 00:26:38.810909 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2025-05-17 00:26:38.811423 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s 2025-05-17 00:26:38.811924 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.06s 2025-05-17 00:26:38.812505 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s 2025-05-17 00:26:38.813425 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2025-05-17 00:26:38.814096 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2025-05-17 00:26:38.814537 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.10s 2025-05-17 00:26:38.814943 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-05-17 00:26:38.815524 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-05-17 00:26:38.816226 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-05-17 00:26:38.816919 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-05-17 00:26:38.817498 | orchestrator | 
osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-05-17 00:26:39.209967 | orchestrator | + osism apply sshconfig 2025-05-17 00:26:40.601525 | orchestrator | 2025-05-17 00:26:40 | INFO  | Task 59751d6b-299d-47c0-af79-1c4c4cdcd435 (sshconfig) was prepared for execution. 2025-05-17 00:26:40.601648 | orchestrator | 2025-05-17 00:26:40 | INFO  | It takes a moment until task 59751d6b-299d-47c0-af79-1c4c4cdcd435 (sshconfig) has been started and output is visible here. 2025-05-17 00:26:43.610473 | orchestrator | 2025-05-17 00:26:43.610641 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-17 00:26:43.610662 | orchestrator | 2025-05-17 00:26:43.611183 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-17 00:26:43.611209 | orchestrator | Saturday 17 May 2025 00:26:43 +0000 (0:00:00.105) 0:00:00.105 ********** 2025-05-17 00:26:44.153075 | orchestrator | ok: [testbed-manager] 2025-05-17 00:26:44.153981 | orchestrator | 2025-05-17 00:26:44.155166 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-17 00:26:44.156264 | orchestrator | Saturday 17 May 2025 00:26:44 +0000 (0:00:00.541) 0:00:00.647 ********** 2025-05-17 00:26:44.618531 | orchestrator | changed: [testbed-manager] 2025-05-17 00:26:44.619198 | orchestrator | 2025-05-17 00:26:44.619387 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-17 00:26:44.620472 | orchestrator | Saturday 17 May 2025 00:26:44 +0000 (0:00:00.465) 0:00:01.113 ********** 2025-05-17 00:26:50.299194 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-17 00:26:50.299676 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-17 00:26:50.300931 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-17 00:26:50.301938 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-17 00:26:50.302955 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-17 00:26:50.303868 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-17 00:26:50.305300 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-17 00:26:50.305894 | orchestrator | 2025-05-17 00:26:50.306535 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-17 00:26:50.307329 | orchestrator | Saturday 17 May 2025 00:26:50 +0000 (0:00:05.680) 0:00:06.793 ********** 2025-05-17 00:26:50.370164 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:26:50.370448 | orchestrator | 2025-05-17 00:26:50.372689 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-17 00:26:50.372713 | orchestrator | Saturday 17 May 2025 00:26:50 +0000 (0:00:00.072) 0:00:06.866 ********** 2025-05-17 00:26:50.937305 | orchestrator | changed: [testbed-manager] 2025-05-17 00:26:50.937913 | orchestrator | 2025-05-17 00:26:50.938972 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:26:50.939605 | orchestrator | 2025-05-17 00:26:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-17 00:26:50.939630 | orchestrator | 2025-05-17 00:26:50 | INFO  | Please wait and do not abort execution. 
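The sshconfig role above writes one config fragment per host into `.ssh/config.d` and then assembles them into a single `config` file. A minimal sketch of that assemble step, with hypothetical fragment contents (the role's exact template is not shown in this log):

```shell
# Hedged sketch of the "Assemble ssh config" step: per-host fragments in
# config.d are concatenated, in filename order, into one ssh config.
# Host names and addresses below are illustrative, not from the role.
d=$(mktemp -d)
mkdir -p "$d/config.d"
printf 'Host testbed-manager\n  HostName 192.168.16.5\n' > "$d/config.d/testbed-manager"
printf 'Host testbed-node-0\n  HostName 192.168.16.10\n' > "$d/config.d/testbed-node-0"

# Ansible's assemble module effectively does this (sorted by fragment name):
cat "$d"/config.d/* > "$d/config"

grep -c '^Host ' "$d/config"   # → 2
```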
2025-05-17 00:26:50.941878 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-17 00:26:50.942637 | orchestrator | 2025-05-17 00:26:50.943871 | orchestrator | Saturday 17 May 2025 00:26:50 +0000 (0:00:00.566) 0:00:07.432 ********** 2025-05-17 00:26:50.944465 | orchestrator | =============================================================================== 2025-05-17 00:26:50.945161 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.68s 2025-05-17 00:26:50.946102 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-05-17 00:26:50.947053 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s 2025-05-17 00:26:50.947741 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.47s 2025-05-17 00:26:50.948590 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-05-17 00:26:51.329998 | orchestrator | + osism apply known-hosts 2025-05-17 00:26:52.719608 | orchestrator | 2025-05-17 00:26:52 | INFO  | Task 8d8e6e17-61d0-4946-9618-c4dbc706efe7 (known-hosts) was prepared for execution. 2025-05-17 00:26:52.719709 | orchestrator | 2025-05-17 00:26:52 | INFO  | It takes a moment until task 8d8e6e17-61d0-4946-9618-c4dbc706efe7 (known-hosts) has been started and output is visible here. 
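The known-hosts play that follows runs `ssh-keyscan` against every host (first by hostname, then by `ansible_host` IP) and writes the collected public keys as known_hosts entries, so later SSH connections skip the fingerprint prompt. Each written entry follows the standard known_hosts line format, `<host> <key-type> <base64-key>`, as visible in the task output below. A sketch of building such a line (scanning itself needs reachable hosts, so only the formatting is shown runnable here):

```shell
# Hedged sketch: composing a known_hosts entry as the "Write scanned
# known_hosts entries" tasks below do. In the real role the fields come
# from `ssh-keyscan -t rsa,ecdsa,ed25519 <host>` (exact flags assumed).
format_known_hosts_entry() {
  # $1 = hostname or IP, $2 = key type, $3 = base64 key blob
  printf '%s %s %s\n' "$1" "$2" "$3"
}

# Hypothetical key blob for illustration:
format_known_hosts_entry testbed-node-0 ssh-ed25519 AAAAC3Example
```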
2025-05-17 00:26:55.686604 | orchestrator | 2025-05-17 00:26:55.687117 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-17 00:26:55.688227 | orchestrator | 2025-05-17 00:26:55.689587 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-17 00:26:55.690477 | orchestrator | Saturday 17 May 2025 00:26:55 +0000 (0:00:00.105) 0:00:00.105 ********** 2025-05-17 00:27:01.592819 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-17 00:27:01.593386 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-17 00:27:01.593657 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-17 00:27:01.595022 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-17 00:27:01.596281 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-17 00:27:01.596643 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-17 00:27:01.597556 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-17 00:27:01.598100 | orchestrator | 2025-05-17 00:27:01.598875 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-17 00:27:01.599274 | orchestrator | Saturday 17 May 2025 00:27:01 +0000 (0:00:05.908) 0:00:06.014 ********** 2025-05-17 00:27:01.778738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-17 00:27:01.779228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-17 00:27:01.781988 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-17 00:27:01.782715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-17 00:27:01.782741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-17 00:27:01.783300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-17 00:27:01.785192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-17 00:27:01.785924 | orchestrator | 2025-05-17 00:27:01.786973 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:01.787552 | orchestrator | Saturday 17 May 2025 00:27:01 +0000 (0:00:00.184) 0:00:06.198 ********** 2025-05-17 00:27:02.979689 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB/x6vwVOWK08TufiA6SgcX5lhofHY+frr/h8J21EiA5) 2025-05-17 00:27:02.983334 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQChwPSr2rlTF1pOrGuUIvWW1zNr4iSeLt2oVW9g1J0XgS0yLVvEz7/GHnu/kBFABHeQEie7fxWuYZmFUwKsjswQmLHTE71JZhLmpt/Kh1x7+GUjK44M1PsXE46BT9fDQI+Vd5p7Xs476wGtyujuRJ+ZyYry3JhDcjTDWKTmQM8ZxOlwLIe5UEhjvx9Lq1/Y9Kl6MAjfa+qmwpmPjyeHjg5FinlyvJY26fFbr2oyqV1/MCylwhnMe3nzJBr005MM5dbOC+Y2ogDOHTSVasFms9ewgboo3aFDzi3ACZXeDDnH9z+GyPmTGj1Eqb8ZF7FIb/jrhjz/OrCdhGe6gWUN3IUxLfLiSfKvLOjYWag5TwvCWpiAjBnhzEzkc21CQTOTeXwZR4gIL21wIHjWFrD3OKk5akLDbirRBvX2bthqcgwEK6N+Qh2nPP7tugyTozpYcHNHPbBh6/g8pvjclXwB56xmyGzt1NuXTdWrebcAAkxE7weGwutEdrVu3ukwNp/RImU=) 2025-05-17 00:27:02.984034 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdK9OE/Nqll98FlmQlaDCtIeOTcv1W+IYHdpEGTU+k6EAhWsOdhVDxTRCb2om4omc1KQHp/n439a/GkipVpJ3g=) 2025-05-17 00:27:02.984622 | orchestrator | 2025-05-17 00:27:02.985101 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:02.985826 | orchestrator | Saturday 17 May 2025 00:27:02 +0000 (0:00:01.200) 0:00:07.399 ********** 2025-05-17 00:27:04.028326 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3a0K5MYKtf0Z6+ayIn8fHf/VA21sgC4m5u6007rmxAJ71wDjrIORkUr/C2/Zj0iIzEafk9GJ9oYYPLhh1prhPSqNacJjZ0iRdv4VYcVb/CEnFOSLMNOVBinPElG43Mn2dSH2l5EHIg3uf1Qh1azzWnUE7TTrh152gQZyPFgZl0VZyULXt37Y6AKlf7caR2LZMy4lFCkUXVYBuQDZt+4M8OI1icC94fMCG5aPERRyd1os/0g+Y1xkBEjnQuuvOiQcfI1Tjmlvtho7hFeY7mtAaeh4lyzokdVCzsQMGRRyvg+NfH5w0sxAZ2HxtRn8hteBVtamzOAmsJrRQ0f8Dw6T8tUHXDTH1K6BotbW/5eonbg8w3Wl8UV1JvjhUq3pFoZdFTVac79IIzCEn89OqsexwEF2kzn9SmkYysjdhH9c9apQ96whFBBg0tNbybsjkRqJ1GPGN5P833cfE4m3HxDhlS7/PMo7z0kr1HywRVYxdloNOeFm8MojyKmt+Re8QlyM=) 2025-05-17 00:27:04.028467 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwVTi5K55AxCMVQ01mta8jC5VVjkepi8Q1eR0H82RdH7xITbTgWxKe1V9lUXUsWMOoR3rT5QtDOdrLKmFN5O/k=) 
2025-05-17 00:27:04.028565 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINQT2Y2ZD7RVh+T5A9YNRy3mBB6BDs3341Y2WWCKqtEJ) 2025-05-17 00:27:04.028900 | orchestrator | 2025-05-17 00:27:04.029393 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:04.029852 | orchestrator | Saturday 17 May 2025 00:27:04 +0000 (0:00:01.049) 0:00:08.448 ********** 2025-05-17 00:27:05.121383 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOGwr2RZ6u6vPYQASZrrMl0TNaKVqpSdYNLk/Git5kfRN+BKUhyiNfHmq+RMgmaHucEs+gaYVcuuz/broSFXIZg=) 2025-05-17 00:27:05.121637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCw2Hqifi7Ovu7z2rp+i6vyH3F2CP2lZcc8GzzslenyF7hMLduCBufCmWXbM+W0Zc9QynrrEvu/9cTAoWeUGVd1hrTFeV1b9NCtU9sA1XBnunWPA80E2Qr6Ubv+CdIhx7vne4rJUvvwoJjVg4VhWfIb1kHkc0YEb/e/x5hOnQql9oAnwHXsRGWkvOvyXUQ0zCPKj/Sh6vNhN840qvkiubi3g85FaQHMYelOt6RgAyTRdcyDuhFZJmwGwn7PSiQcT2jyWH71mOvZdxe0YyZv0QXjSW0rUWrKpo9+HlMedCgVmnholXT5OvjELIbMO7kGOBKgOhikLSYHElnfabuEt5x3JHN3btryDuBe/SGKsYHJLVfXwAP4kGHCvuE2dYulRZr7Wss4KI9k9Up9/uJGL4RkiJei+auzHh/bKcrmsnMHb+N72vyo1KfGL8RIpznx5x/Kw+Mg+AVciOAMxxYiQNLIFnWfAaezxoIBUoItNV1S1obni/jvxg30vSmElc2El7c=) 2025-05-17 00:27:05.122483 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN69WZ+pbsmS+CiSiP8veD9Jm5U8+gUp0s3ZU2HyTXte) 2025-05-17 00:27:05.122704 | orchestrator | 2025-05-17 00:27:05.124609 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:05.124679 | orchestrator | Saturday 17 May 2025 00:27:05 +0000 (0:00:01.092) 0:00:09.541 ********** 2025-05-17 00:27:06.181928 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDbOz8JfVelZPYZvhEtxwYJOeHmdYNhp4Azq+qNr0DxT+0+qME+NIYMSYPwgpVH5xEah+ZA02iWEPLNg7N0wtV+P1c8af3o3MzgW0yX9RmIz0AC+e/B+EgFOfV1SfmGIC3E2OxnBrUj5gjMHN0GW5drnFk/xRMdv6Ex2Wegu2jDTSZjLAOrPGfdDvQUhQN4T5VF0pfUOSubfJXiyJYUqiFbJoQTHlStBy0yC2OS9iJqg3wG5WGRRtW/J5o/YbC1LNYu4nqYfXYccWkAfcEz+lQtTGIUe8vXGLW6QooGsVCe0i+7s5ZkZSp3HxeGGXQrimDoSop19XiiD6tb1MoAe822JWwCsU1U8PQxgrqll8PzFIxTSzYgRLGhBwhM9ylmP0ohGN6XLg42Vma4+HpU6D5oG+hVghLUcajaMrwaSzHc+fGIqLtLQjcdPLCZNYNEi/G3mWshPn8Ju5xECV89S8zXt5FTQb7N2MLn3cRXsMAjpamhCO6mhGI2ukaEJT2NCPc=) 2025-05-17 00:27:06.182159 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGFd2HHMOZMtxG1mQ9q11z7sQm0lexk4jZG6F0f5QA1xJLSQUAOBNh6cfDGjWLXy0zVLjs75VdojrLiAit4SpNg=) 2025-05-17 00:27:06.186807 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILd+ht9We+xLtlJ/8JLZEjwXbZ5oczn/ViSoAqoen3yr) 2025-05-17 00:27:06.186847 | orchestrator | 2025-05-17 00:27:06.186859 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:06.186872 | orchestrator | Saturday 17 May 2025 00:27:06 +0000 (0:00:01.059) 0:00:10.601 ********** 2025-05-17 00:27:07.257885 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLEqDPU7kT+AOhIjIOmIoOlWuNFO2p4k99XKJGzVl/wsdnSW7iGq9C9TjRXKdDSZAzySJRlWi1VQwr4c7oMzg/j9ls3Ev052fmDmVlzal8ySUTs1BbMTfHTgreiuYu6UH+wrWfh3tmZAVfpm867muGnnFh/K1Zb3gSGxwljB2U+7Z/urn8sGmS6A5qQYkawxpj6+7OZwjt+fgRlZxMR4I5UYJaF/HHWKqfPZBRc0ER2F85pUGVuCYIlX22vn7wIXomHv3YGbFErqVxGczBeTW4ncwe7HCmW3LuEFe0HF29F6ZGvma0yr0/FIDRWQ0kslelFm6RLIgKRIskwJMaL+4e0gyHSkXMtB+c20gxJh1OZu4PmGA/Q3BGtInpPiiKpxNuDkVTB3Cz535Y5R05o5+MCnh+JUNCJ0qKj1wNzCEEgUA/qlw1e97Q3+9i3tiTDIdnRYYi4mdc7aQjHoY0sMqwdFyN0vNuaXVzuUHuQ7wdhD9cSYl267+BTWLmw08p8pE=) 2025-05-17 00:27:07.258167 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLj80q15IqzzHKHHzNke6HY1uHSTtSwKTVZr2tZo8pkSX6uypDV75Kv06YgJrxrRfo7VoRYfG9Z3qHFLRZj/waA=) 2025-05-17 00:27:07.259622 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOWsq2T+FNXEzmJr9bojGAVKNtIQmLCGWeZ4vLDQBd44) 2025-05-17 00:27:07.260687 | orchestrator | 2025-05-17 00:27:07.263613 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:07.264289 | orchestrator | Saturday 17 May 2025 00:27:07 +0000 (0:00:01.076) 0:00:11.678 ********** 2025-05-17 00:27:08.396340 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBlI6fygiKlkvxW78QJI7q6Z51kQBdRnmKrLS99I6MRYpLLvXr99y8mYVkunKF+3IhGMEhW1XaWnzKU7kXYetOk=) 2025-05-17 00:27:08.397826 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDclwj8e/tPzgwHsAv3S/ai+Vx3VysZDWrRXAGk3kwIVqAKppMZT8fRS8lS0Zl5RP7xHvODFIeTH0cU/92MUcEjPRY1FYHiaJ3QpKPEEsUplsXSgLlR6ZZ5m5CxbYxxuBclvHIudrCYrqPKAXcifo+Xax+YqME+sRLM2Elf9BoM9xSsr29bznheYp63DyXWDUj/Z0AtEP/VfdHmS/bpTZ3J6FUiMho7fRUTRcBO9XhSIRTsi4ruIqqAbFNzRoxhoBgCR2ZjX6ltpvdMAeNNeRhvPK7lqYaBZ7L3lBIXLM2T0rECDhi70n8lTtUHqtc7BIkVRe/o6Lb3FOuziMcEhDS8dmWMZCq15QLApe8Xz2EWQPWUa+EOIRYVicHiXoAc4vdGG6Kvm0XAuGYjQ27QOahJ7kseSUuuwktmvAOU7yh6pvOZCw8M8NxjiBKE+i+FLFZC7TuA9g41bOSekNC70/FpnNimRrF24sxVfroFYD8JgJMAKdwgbbC04GL1bV94NU=) 2025-05-17 00:27:08.398816 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDIyKPjUs/yY/W93lAcn1wAbZAyQLD3l/rrdz1s/4z6a) 2025-05-17 00:27:08.398971 | orchestrator | 2025-05-17 00:27:08.399429 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:08.400169 | orchestrator | Saturday 17 May 2025 00:27:08 +0000 (0:00:01.138) 
0:00:12.817 ********** 2025-05-17 00:27:09.479254 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBInB0HTqHMK2YCH+fM67FX0m6VpP4N2RcM6ooRTZtyusAcGog7KbpzVBe4RVzxn+gz0Z2Jy5qXO/Qh/dIHlYueo=) 2025-05-17 00:27:09.480455 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDowDLBwh8trOHF1L1UYKvrFC7ppsbm2vYElGfm8NJPJ/UOjmMOf0jX1m89e+sG3SLV0aATM9ypIBAxgDEq8mMtMAz36tBHFHdfxykzEpYTDlg0091KiQPgWnrX3H+P8FxEyMrEd9MpWKHpi7BfcRRcA8IxwXx8fjHlqdS5bFvqeFxz9Cm1/9LITziheLC4CGaMTVg18RjslrcAjNazx2jBcJ8WoySB8r0vxGOmGzM8UWjTAOF7n5wMUN/KyuLXeew2jBjEapFzNybiUP7w8dFrRx3AgvfdfFJ2n2TYxDnSa7cQYFws+hQhTfSV1W/pG9zGT1jOSUxhmnnwpodzItePq7IciGwZneTxBLQ6kssZbv86MyfuaymASC8GKQADs4sLrpGLnWRg5rRMYBKnBCtrsfyXL3Y96mGpWMQ7zOzqn80OtxWiyy+J/NnG3VZaHJU7O/q+AsnXloqlyKrUFNyZxczN14xz5op+cmnSAXcVDPYUTrzTu1Tc4c8r8Hm1NIc=) 2025-05-17 00:27:09.481528 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIhFcro5VfdNnQlQHSyrxQXJYPXAfIFKgqpJeZFJkoQ9) 2025-05-17 00:27:09.482509 | orchestrator | 2025-05-17 00:27:09.484823 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-17 00:27:09.485012 | orchestrator | Saturday 17 May 2025 00:27:09 +0000 (0:00:01.082) 0:00:13.899 ********** 2025-05-17 00:27:14.851349 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-17 00:27:14.853159 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-17 00:27:14.853197 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-17 00:27:14.854139 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-17 00:27:14.855706 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-17 00:27:14.856269 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-17 00:27:14.856690 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-2) 2025-05-17 00:27:14.857198 | orchestrator | 2025-05-17 00:27:14.857963 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-17 00:27:14.858968 | orchestrator | Saturday 17 May 2025 00:27:14 +0000 (0:00:05.372) 0:00:19.272 ********** 2025-05-17 00:27:15.025233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-17 00:27:15.025512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-17 00:27:15.026687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-17 00:27:15.028616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-17 00:27:15.029621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-17 00:27:15.030100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-17 00:27:15.031674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-17 00:27:15.031917 | orchestrator | 2025-05-17 00:27:15.032351 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:15.033036 | orchestrator | Saturday 17 May 2025 00:27:15 +0000 (0:00:00.174) 0:00:19.447 ********** 2025-05-17 00:27:16.146282 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdK9OE/Nqll98FlmQlaDCtIeOTcv1W+IYHdpEGTU+k6EAhWsOdhVDxTRCb2om4omc1KQHp/n439a/GkipVpJ3g=) 2025-05-17 00:27:16.146391 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChwPSr2rlTF1pOrGuUIvWW1zNr4iSeLt2oVW9g1J0XgS0yLVvEz7/GHnu/kBFABHeQEie7fxWuYZmFUwKsjswQmLHTE71JZhLmpt/Kh1x7+GUjK44M1PsXE46BT9fDQI+Vd5p7Xs476wGtyujuRJ+ZyYry3JhDcjTDWKTmQM8ZxOlwLIe5UEhjvx9Lq1/Y9Kl6MAjfa+qmwpmPjyeHjg5FinlyvJY26fFbr2oyqV1/MCylwhnMe3nzJBr005MM5dbOC+Y2ogDOHTSVasFms9ewgboo3aFDzi3ACZXeDDnH9z+GyPmTGj1Eqb8ZF7FIb/jrhjz/OrCdhGe6gWUN3IUxLfLiSfKvLOjYWag5TwvCWpiAjBnhzEzkc21CQTOTeXwZR4gIL21wIHjWFrD3OKk5akLDbirRBvX2bthqcgwEK6N+Qh2nPP7tugyTozpYcHNHPbBh6/g8pvjclXwB56xmyGzt1NuXTdWrebcAAkxE7weGwutEdrVu3ukwNp/RImU=) 2025-05-17 00:27:16.146410 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB/x6vwVOWK08TufiA6SgcX5lhofHY+frr/h8J21EiA5) 2025-05-17 00:27:16.147169 | orchestrator | 2025-05-17 00:27:16.147677 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:16.148977 | orchestrator | Saturday 17 May 2025 00:27:16 +0000 (0:00:01.120) 0:00:20.567 ********** 2025-05-17 00:27:17.250179 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3a0K5MYKtf0Z6+ayIn8fHf/VA21sgC4m5u6007rmxAJ71wDjrIORkUr/C2/Zj0iIzEafk9GJ9oYYPLhh1prhPSqNacJjZ0iRdv4VYcVb/CEnFOSLMNOVBinPElG43Mn2dSH2l5EHIg3uf1Qh1azzWnUE7TTrh152gQZyPFgZl0VZyULXt37Y6AKlf7caR2LZMy4lFCkUXVYBuQDZt+4M8OI1icC94fMCG5aPERRyd1os/0g+Y1xkBEjnQuuvOiQcfI1Tjmlvtho7hFeY7mtAaeh4lyzokdVCzsQMGRRyvg+NfH5w0sxAZ2HxtRn8hteBVtamzOAmsJrRQ0f8Dw6T8tUHXDTH1K6BotbW/5eonbg8w3Wl8UV1JvjhUq3pFoZdFTVac79IIzCEn89OqsexwEF2kzn9SmkYysjdhH9c9apQ96whFBBg0tNbybsjkRqJ1GPGN5P833cfE4m3HxDhlS7/PMo7z0kr1HywRVYxdloNOeFm8MojyKmt+Re8QlyM=) 2025-05-17 00:27:17.250725 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwVTi5K55AxCMVQ01mta8jC5VVjkepi8Q1eR0H82RdH7xITbTgWxKe1V9lUXUsWMOoR3rT5QtDOdrLKmFN5O/k=) 2025-05-17 00:27:17.251357 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINQT2Y2ZD7RVh+T5A9YNRy3mBB6BDs3341Y2WWCKqtEJ) 2025-05-17 00:27:17.252796 | orchestrator | 2025-05-17 00:27:17.252889 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:17.253503 | orchestrator | Saturday 17 May 2025 00:27:17 +0000 (0:00:01.102) 0:00:21.670 ********** 2025-05-17 00:27:18.388804 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCw2Hqifi7Ovu7z2rp+i6vyH3F2CP2lZcc8GzzslenyF7hMLduCBufCmWXbM+W0Zc9QynrrEvu/9cTAoWeUGVd1hrTFeV1b9NCtU9sA1XBnunWPA80E2Qr6Ubv+CdIhx7vne4rJUvvwoJjVg4VhWfIb1kHkc0YEb/e/x5hOnQql9oAnwHXsRGWkvOvyXUQ0zCPKj/Sh6vNhN840qvkiubi3g85FaQHMYelOt6RgAyTRdcyDuhFZJmwGwn7PSiQcT2jyWH71mOvZdxe0YyZv0QXjSW0rUWrKpo9+HlMedCgVmnholXT5OvjELIbMO7kGOBKgOhikLSYHElnfabuEt5x3JHN3btryDuBe/SGKsYHJLVfXwAP4kGHCvuE2dYulRZr7Wss4KI9k9Up9/uJGL4RkiJei+auzHh/bKcrmsnMHb+N72vyo1KfGL8RIpznx5x/Kw+Mg+AVciOAMxxYiQNLIFnWfAaezxoIBUoItNV1S1obni/jvxg30vSmElc2El7c=) 2025-05-17 00:27:18.389284 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOGwr2RZ6u6vPYQASZrrMl0TNaKVqpSdYNLk/Git5kfRN+BKUhyiNfHmq+RMgmaHucEs+gaYVcuuz/broSFXIZg=) 2025-05-17 00:27:18.389686 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN69WZ+pbsmS+CiSiP8veD9Jm5U8+gUp0s3ZU2HyTXte) 2025-05-17 00:27:18.389715 | orchestrator | 2025-05-17 00:27:18.390576 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:18.390909 | orchestrator | Saturday 17 May 2025 00:27:18 +0000 (0:00:01.139) 0:00:22.810 ********** 2025-05-17 00:27:19.474225 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbOz8JfVelZPYZvhEtxwYJOeHmdYNhp4Azq+qNr0DxT+0+qME+NIYMSYPwgpVH5xEah+ZA02iWEPLNg7N0wtV+P1c8af3o3MzgW0yX9RmIz0AC+e/B+EgFOfV1SfmGIC3E2OxnBrUj5gjMHN0GW5drnFk/xRMdv6Ex2Wegu2jDTSZjLAOrPGfdDvQUhQN4T5VF0pfUOSubfJXiyJYUqiFbJoQTHlStBy0yC2OS9iJqg3wG5WGRRtW/J5o/YbC1LNYu4nqYfXYccWkAfcEz+lQtTGIUe8vXGLW6QooGsVCe0i+7s5ZkZSp3HxeGGXQrimDoSop19XiiD6tb1MoAe822JWwCsU1U8PQxgrqll8PzFIxTSzYgRLGhBwhM9ylmP0ohGN6XLg42Vma4+HpU6D5oG+hVghLUcajaMrwaSzHc+fGIqLtLQjcdPLCZNYNEi/G3mWshPn8Ju5xECV89S8zXt5FTQb7N2MLn3cRXsMAjpamhCO6mhGI2ukaEJT2NCPc=) 2025-05-17 00:27:19.475849 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGFd2HHMOZMtxG1mQ9q11z7sQm0lexk4jZG6F0f5QA1xJLSQUAOBNh6cfDGjWLXy0zVLjs75VdojrLiAit4SpNg=) 2025-05-17 00:27:19.476458 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILd+ht9We+xLtlJ/8JLZEjwXbZ5oczn/ViSoAqoen3yr) 2025-05-17 00:27:19.477745 | orchestrator | 2025-05-17 00:27:19.477936 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:19.479187 | orchestrator | Saturday 17 May 2025 00:27:19 +0000 (0:00:01.083) 0:00:23.894 
********** 2025-05-17 00:27:20.566240 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLj80q15IqzzHKHHzNke6HY1uHSTtSwKTVZr2tZo8pkSX6uypDV75Kv06YgJrxrRfo7VoRYfG9Z3qHFLRZj/waA=) 2025-05-17 00:27:20.567000 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLEqDPU7kT+AOhIjIOmIoOlWuNFO2p4k99XKJGzVl/wsdnSW7iGq9C9TjRXKdDSZAzySJRlWi1VQwr4c7oMzg/j9ls3Ev052fmDmVlzal8ySUTs1BbMTfHTgreiuYu6UH+wrWfh3tmZAVfpm867muGnnFh/K1Zb3gSGxwljB2U+7Z/urn8sGmS6A5qQYkawxpj6+7OZwjt+fgRlZxMR4I5UYJaF/HHWKqfPZBRc0ER2F85pUGVuCYIlX22vn7wIXomHv3YGbFErqVxGczBeTW4ncwe7HCmW3LuEFe0HF29F6ZGvma0yr0/FIDRWQ0kslelFm6RLIgKRIskwJMaL+4e0gyHSkXMtB+c20gxJh1OZu4PmGA/Q3BGtInpPiiKpxNuDkVTB3Cz535Y5R05o5+MCnh+JUNCJ0qKj1wNzCEEgUA/qlw1e97Q3+9i3tiTDIdnRYYi4mdc7aQjHoY0sMqwdFyN0vNuaXVzuUHuQ7wdhD9cSYl267+BTWLmw08p8pE=) 2025-05-17 00:27:20.567648 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOWsq2T+FNXEzmJr9bojGAVKNtIQmLCGWeZ4vLDQBd44) 2025-05-17 00:27:20.568743 | orchestrator | 2025-05-17 00:27:20.569198 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:20.569519 | orchestrator | Saturday 17 May 2025 00:27:20 +0000 (0:00:01.093) 0:00:24.987 ********** 2025-05-17 00:27:21.642724 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDIyKPjUs/yY/W93lAcn1wAbZAyQLD3l/rrdz1s/4z6a) 2025-05-17 00:27:21.642937 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDDclwj8e/tPzgwHsAv3S/ai+Vx3VysZDWrRXAGk3kwIVqAKppMZT8fRS8lS0Zl5RP7xHvODFIeTH0cU/92MUcEjPRY1FYHiaJ3QpKPEEsUplsXSgLlR6ZZ5m5CxbYxxuBclvHIudrCYrqPKAXcifo+Xax+YqME+sRLM2Elf9BoM9xSsr29bznheYp63DyXWDUj/Z0AtEP/VfdHmS/bpTZ3J6FUiMho7fRUTRcBO9XhSIRTsi4ruIqqAbFNzRoxhoBgCR2ZjX6ltpvdMAeNNeRhvPK7lqYaBZ7L3lBIXLM2T0rECDhi70n8lTtUHqtc7BIkVRe/o6Lb3FOuziMcEhDS8dmWMZCq15QLApe8Xz2EWQPWUa+EOIRYVicHiXoAc4vdGG6Kvm0XAuGYjQ27QOahJ7kseSUuuwktmvAOU7yh6pvOZCw8M8NxjiBKE+i+FLFZC7TuA9g41bOSekNC70/FpnNimRrF24sxVfroFYD8JgJMAKdwgbbC04GL1bV94NU=) 2025-05-17 00:27:21.642970 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBlI6fygiKlkvxW78QJI7q6Z51kQBdRnmKrLS99I6MRYpLLvXr99y8mYVkunKF+3IhGMEhW1XaWnzKU7kXYetOk=) 2025-05-17 00:27:21.643354 | orchestrator | 2025-05-17 00:27:21.645271 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-17 00:27:21.645705 | orchestrator | Saturday 17 May 2025 00:27:21 +0000 (0:00:01.075) 0:00:26.063 ********** 2025-05-17 00:27:22.693145 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIhFcro5VfdNnQlQHSyrxQXJYPXAfIFKgqpJeZFJkoQ9) 2025-05-17 00:27:22.694210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDowDLBwh8trOHF1L1UYKvrFC7ppsbm2vYElGfm8NJPJ/UOjmMOf0jX1m89e+sG3SLV0aATM9ypIBAxgDEq8mMtMAz36tBHFHdfxykzEpYTDlg0091KiQPgWnrX3H+P8FxEyMrEd9MpWKHpi7BfcRRcA8IxwXx8fjHlqdS5bFvqeFxz9Cm1/9LITziheLC4CGaMTVg18RjslrcAjNazx2jBcJ8WoySB8r0vxGOmGzM8UWjTAOF7n5wMUN/KyuLXeew2jBjEapFzNybiUP7w8dFrRx3AgvfdfFJ2n2TYxDnSa7cQYFws+hQhTfSV1W/pG9zGT1jOSUxhmnnwpodzItePq7IciGwZneTxBLQ6kssZbv86MyfuaymASC8GKQADs4sLrpGLnWRg5rRMYBKnBCtrsfyXL3Y96mGpWMQ7zOzqn80OtxWiyy+J/NnG3VZaHJU7O/q+AsnXloqlyKrUFNyZxczN14xz5op+cmnSAXcVDPYUTrzTu1Tc4c8r8Hm1NIc=) 2025-05-17 00:27:22.694262 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBInB0HTqHMK2YCH+fM67FX0m6VpP4N2RcM6ooRTZtyusAcGog7KbpzVBe4RVzxn+gz0Z2Jy5qXO/Qh/dIHlYueo=) 2025-05-17 00:27:22.694710 | orchestrator | 2025-05-17 00:27:22.695303 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-17 00:27:22.697158 | orchestrator | Saturday 17 May 2025 00:27:22 +0000 (0:00:01.048) 0:00:27.112 ********** 2025-05-17 00:27:22.843080 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-17 00:27:22.843174 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-17 00:27:22.843406 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-17 00:27:22.844211 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-17 00:27:22.844816 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-17 00:27:22.846272 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-17 00:27:22.846956 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-17 00:27:22.847841 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:27:22.848357 | orchestrator | 2025-05-17 00:27:22.848851 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-17 00:27:22.850426 | orchestrator | Saturday 17 May 2025 00:27:22 +0000 (0:00:00.152) 0:00:27.264 ********** 2025-05-17 00:27:22.910358 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:27:22.910417 | orchestrator | 2025-05-17 00:27:22.910431 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-17 00:27:22.911042 | orchestrator | Saturday 17 May 2025 00:27:22 +0000 (0:00:00.066) 0:00:27.331 ********** 2025-05-17 00:27:22.976279 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:27:22.977346 | orchestrator | 2025-05-17 
00:27:22.978112 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-17 00:27:22.979445 | orchestrator | Saturday 17 May 2025 00:27:22 +0000 (0:00:00.067) 0:00:27.398 ********** 2025-05-17 00:27:23.775569 | orchestrator | changed: [testbed-manager] 2025-05-17 00:27:23.775960 | orchestrator | 2025-05-17 00:27:23.777825 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:27:23.779101 | orchestrator | 2025-05-17 00:27:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-17 00:27:23.779127 | orchestrator | 2025-05-17 00:27:23 | INFO  | Please wait and do not abort execution. 2025-05-17 00:27:23.781734 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-17 00:27:23.782801 | orchestrator | 2025-05-17 00:27:23.783870 | orchestrator | Saturday 17 May 2025 00:27:23 +0000 (0:00:00.798) 0:00:28.197 ********** 2025-05-17 00:27:23.784678 | orchestrator | =============================================================================== 2025-05-17 00:27:23.785835 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.91s 2025-05-17 00:27:23.786308 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.37s 2025-05-17 00:27:23.787160 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-05-17 00:27:23.787713 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-17 00:27:23.788162 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-17 00:27:23.788445 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-05-17 00:27:23.789373 | orchestrator | osism.commons.known_hosts : Write 
scanned known_hosts entries ----------- 1.10s 2025-05-17 00:27:23.789395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-17 00:27:23.789923 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-17 00:27:23.790397 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-17 00:27:23.790975 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-17 00:27:23.791527 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-17 00:27:23.791985 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-17 00:27:23.792552 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-17 00:27:23.792952 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-17 00:27:23.793518 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-17 00:27:23.793960 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.80s 2025-05-17 00:27:23.794585 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-05-17 00:27:23.794926 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-17 00:27:23.795533 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-05-17 00:27:24.216721 | orchestrator | + osism apply squid 2025-05-17 00:27:25.653108 | orchestrator | 2025-05-17 00:27:25 | INFO  | Task 2e95f749-8715-4501-9249-90e7ff6f9362 (squid) was prepared for execution. 
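The known_hosts play above pairs a per-host `ssh-keyscan` (the two ~5s tasks in the timing summary) with one "Write scanned known_hosts entries" task per address, then tightens file permissions at the end. A minimal offline-safe shell sketch of that flow — the IPs, timeout, and `/tmp` file path are illustrative assumptions, not the role's actual configuration:

```shell
#!/bin/sh
# Sketch: gather RSA/ECDSA/Ed25519 host keys per address and append
# them to a known_hosts file, mirroring the role's scan-then-write
# split. Unreachable demo hosts simply contribute no lines.
KNOWN_HOSTS=/tmp/known_hosts.demo
: > "$KNOWN_HOSTS"

for host in 192.168.16.5 192.168.16.13; do
    # -t limits key types; -T bounds the per-host wait; each scanned
    # key becomes one "host keytype base64blob" line, as in the log
    ssh-keyscan -T 2 -t rsa,ecdsa,ed25519 "$host" 2>/dev/null >> "$KNOWN_HOSTS"
done

# Final "Set file permissions" step: make the file world-readable
chmod 0644 "$KNOWN_HOSTS"
```

Scanning once and writing entries per host keeps the slow network step out of the per-entry write tasks, which is why the write tasks above each take only about a second.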
2025-05-17 00:27:25.653187 | orchestrator | 2025-05-17 00:27:25 | INFO  | It takes a moment until task 2e95f749-8715-4501-9249-90e7ff6f9362 (squid) has been started and output is visible here. 2025-05-17 00:27:28.623270 | orchestrator | 2025-05-17 00:27:28.624197 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-17 00:27:28.625343 | orchestrator | 2025-05-17 00:27:28.625973 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-17 00:27:28.626489 | orchestrator | Saturday 17 May 2025 00:27:28 +0000 (0:00:00.104) 0:00:00.104 ********** 2025-05-17 00:27:28.711513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-17 00:27:28.711835 | orchestrator | 2025-05-17 00:27:28.712714 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-17 00:27:28.714118 | orchestrator | Saturday 17 May 2025 00:27:28 +0000 (0:00:00.091) 0:00:00.196 ********** 2025-05-17 00:27:30.097814 | orchestrator | ok: [testbed-manager] 2025-05-17 00:27:30.098650 | orchestrator | 2025-05-17 00:27:30.099107 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-17 00:27:30.099565 | orchestrator | Saturday 17 May 2025 00:27:30 +0000 (0:00:01.384) 0:00:01.581 ********** 2025-05-17 00:27:31.269661 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-17 00:27:31.269999 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-17 00:27:31.270945 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-17 00:27:31.271558 | orchestrator | 2025-05-17 00:27:31.272619 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-17 00:27:31.273165 | orchestrator | Saturday 
17 May 2025 00:27:31 +0000 (0:00:01.171) 0:00:02.753 ********** 2025-05-17 00:27:32.365286 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-17 00:27:32.365500 | orchestrator | 2025-05-17 00:27:32.367819 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-17 00:27:32.368447 | orchestrator | Saturday 17 May 2025 00:27:32 +0000 (0:00:01.094) 0:00:03.848 ********** 2025-05-17 00:27:32.716106 | orchestrator | ok: [testbed-manager] 2025-05-17 00:27:32.716255 | orchestrator | 2025-05-17 00:27:32.716337 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-17 00:27:32.716354 | orchestrator | Saturday 17 May 2025 00:27:32 +0000 (0:00:00.350) 0:00:04.199 ********** 2025-05-17 00:27:33.748092 | orchestrator | changed: [testbed-manager] 2025-05-17 00:27:33.748303 | orchestrator | 2025-05-17 00:27:33.749302 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-17 00:27:33.750247 | orchestrator | Saturday 17 May 2025 00:27:33 +0000 (0:00:01.032) 0:00:05.231 ********** 2025-05-17 00:28:05.818706 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-05-17 00:28:05.818870 | orchestrator | ok: [testbed-manager] 2025-05-17 00:28:05.818887 | orchestrator | 2025-05-17 00:28:05.818899 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-17 00:28:05.818910 | orchestrator | Saturday 17 May 2025 00:28:05 +0000 (0:00:32.064) 0:00:37.296 ********** 2025-05-17 00:28:18.799820 | orchestrator | changed: [testbed-manager] 2025-05-17 00:28:18.799941 | orchestrator | 2025-05-17 00:28:18.799958 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-17 00:28:18.800097 | orchestrator | Saturday 17 May 2025 00:28:18 +0000 (0:00:12.985) 0:00:50.282 ********** 2025-05-17 00:29:18.878319 | orchestrator | Pausing for 60 seconds 2025-05-17 00:29:18.878544 | orchestrator | changed: [testbed-manager] 2025-05-17 00:29:18.878616 | orchestrator | 2025-05-17 00:29:18.880867 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-17 00:29:18.881084 | orchestrator | Saturday 17 May 2025 00:29:18 +0000 (0:01:00.076) 0:01:50.358 ********** 2025-05-17 00:29:18.947315 | orchestrator | ok: [testbed-manager] 2025-05-17 00:29:18.947566 | orchestrator | 2025-05-17 00:29:18.948941 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-17 00:29:18.948969 | orchestrator | Saturday 17 May 2025 00:29:18 +0000 (0:00:00.073) 0:01:50.432 ********** 2025-05-17 00:29:19.577520 | orchestrator | changed: [testbed-manager] 2025-05-17 00:29:19.577625 | orchestrator | 2025-05-17 00:29:19.577879 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:29:19.577927 | orchestrator | 2025-05-17 00:29:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-17 00:29:19.577994 | orchestrator | 2025-05-17 00:29:19 | INFO  | Please wait and do not abort execution. 2025-05-17 00:29:19.579171 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 00:29:19.580070 | orchestrator | 2025-05-17 00:29:19.581412 | orchestrator | Saturday 17 May 2025 00:29:19 +0000 (0:00:00.630) 0:01:51.062 ********** 2025-05-17 00:29:19.581757 | orchestrator | =============================================================================== 2025-05-17 00:29:19.582966 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-17 00:29:19.583900 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.06s 2025-05-17 00:29:19.584565 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.99s 2025-05-17 00:29:19.585446 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.38s 2025-05-17 00:29:19.586063 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s 2025-05-17 00:29:19.586356 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2025-05-17 00:29:19.586965 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.03s 2025-05-17 00:29:19.587260 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2025-05-17 00:29:19.587744 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-05-17 00:29:19.588277 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-05-17 00:29:19.588644 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-05-17 00:29:19.969863 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-17 00:29:19.969960 | 
orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-05-17 00:29:19.972928 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-17 00:29:20.019643 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-17 00:29:20.019715 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-17 00:29:20.019728 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-05-17 00:29:20.022352 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-17 00:29:20.024925 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-17 00:29:20.031687 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-17 00:29:21.478170 | orchestrator | 2025-05-17 00:29:21 | INFO  | Task 938c5f25-81a4-49a7-bbf7-a9fd1f0aea13 (operator) was prepared for execution. 2025-05-17 00:29:21.478270 | orchestrator | 2025-05-17 00:29:21 | INFO  | It takes a moment until task 938c5f25-81a4-49a7-bbf7-a9fd1f0aea13 (operator) has been started and output is visible here. 
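The `+ sed -i` trace lines above activate commented-out keys in the testbed group_vars by matching the whole commented line and re-emitting only the captured, uncommented part. A self-contained sketch of that capture-group pattern — the temp file path, contents, and indentation here are illustrative, not the testbed's real files:

```shell
#!/bin/sh
# Sketch of the uncomment-via-sed pattern from the job: lines shipped
# as "# key:" in a group_vars file are enabled by stripping "# ".
CFG=/tmp/testbed-nodes.demo.yml
cat > "$CFG" <<'EOF'
# network_dispatcher_scripts:
#  - src: /opt/configuration/network/vxlan.sh
#    dest: routable.d/vxlan.sh
EOF

# Anchor on the exact commented line, capture everything after "# ",
# and replace the line with just the captured group.
sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' "$CFG"
sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' "$CFG"
sed -i 's|^# \(   dest: routable.d/vxlan.sh\)$|\1|g' "$CFG"
```

Matching the full line (including the trailing `$`) keeps the edit idempotent: once a line is uncommented, the pattern no longer matches, so re-running the script is safe — useful in a deploy step that may be retried.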
2025-05-17 00:29:24.433739 | orchestrator | 2025-05-17 00:29:24.433887 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-17 00:29:24.433904 | orchestrator | 2025-05-17 00:29:24.433917 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-17 00:29:24.433928 | orchestrator | Saturday 17 May 2025 00:29:24 +0000 (0:00:00.089) 0:00:00.089 ********** 2025-05-17 00:29:27.775320 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:29:27.775487 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:29:27.775979 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:29:27.776435 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:29:27.776844 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:29:27.777360 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:29:27.777812 | orchestrator | 2025-05-17 00:29:27.780736 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-17 00:29:27.782799 | orchestrator | Saturday 17 May 2025 00:29:27 +0000 (0:00:03.346) 0:00:03.436 ********** 2025-05-17 00:29:28.540441 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:29:28.540612 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:29:28.541069 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:29:28.541462 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:29:28.542320 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:29:28.542758 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:29:28.543200 | orchestrator | 2025-05-17 00:29:28.543914 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-17 00:29:28.544337 | orchestrator | 2025-05-17 00:29:28.545055 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-17 00:29:28.545436 | orchestrator | Saturday 17 May 2025 00:29:28 +0000 (0:00:00.762) 0:00:04.198 ********** 2025-05-17 
00:29:28.611536 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:29:28.635181 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:29:28.677586 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:29:28.729596 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:29:28.732871 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:29:28.733294 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:29:28.733936 | orchestrator | 2025-05-17 00:29:28.734902 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-17 00:29:28.735479 | orchestrator | Saturday 17 May 2025 00:29:28 +0000 (0:00:00.189) 0:00:04.388 ********** 2025-05-17 00:29:28.794515 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:29:28.825878 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:29:28.847149 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:29:28.899259 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:29:28.899880 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:29:28.900411 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:29:28.901853 | orchestrator | 2025-05-17 00:29:28.902153 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-17 00:29:28.902847 | orchestrator | Saturday 17 May 2025 00:29:28 +0000 (0:00:00.170) 0:00:04.559 ********** 2025-05-17 00:29:29.519283 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:29:29.519808 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:29:29.521170 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:29:29.522762 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:29:29.523210 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:29:29.524955 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:29:29.526165 | orchestrator | 2025-05-17 00:29:29.526846 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-17 00:29:29.527489 | orchestrator | Saturday 17 May 2025 
00:29:29 +0000 (0:00:00.619) 0:00:05.178 ********** 2025-05-17 00:29:30.348298 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:29:30.348456 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:29:30.349565 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:29:30.351215 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:29:30.351967 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:29:30.353186 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:29:30.354231 | orchestrator | 2025-05-17 00:29:30.355255 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-17 00:29:30.355789 | orchestrator | Saturday 17 May 2025 00:29:30 +0000 (0:00:00.826) 0:00:06.005 ********** 2025-05-17 00:29:31.596703 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-17 00:29:31.599634 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-17 00:29:31.599677 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-17 00:29:31.601480 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-17 00:29:31.604012 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-17 00:29:31.605452 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-17 00:29:31.606263 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-17 00:29:31.607482 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-17 00:29:31.607796 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-17 00:29:31.608229 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-17 00:29:31.608469 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-17 00:29:31.608827 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-17 00:29:31.609227 | orchestrator | 2025-05-17 00:29:31.610112 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-17 00:29:31.610286 | orchestrator | Saturday 17 
May 2025 00:29:31 +0000 (0:00:01.250) 0:00:07.255 ********** 2025-05-17 00:29:32.921576 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:29:32.921684 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:29:32.922612 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:29:32.922639 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:29:32.922650 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:29:32.922661 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:29:32.922673 | orchestrator | 2025-05-17 00:29:32.922686 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-17 00:29:32.922700 | orchestrator | Saturday 17 May 2025 00:29:32 +0000 (0:00:01.323) 0:00:08.578 ********** 2025-05-17 00:29:34.159118 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-17 00:29:34.167845 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-17 00:29:34.169395 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-17 00:29:34.230196 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-17 00:29:34.230352 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-17 00:29:34.230994 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-17 00:29:34.234556 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-17 00:29:34.235003 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-17 00:29:34.235498 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-17 00:29:34.236205 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-17 00:29:34.236669 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-17 00:29:34.237158 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-17 00:29:34.237636 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-17 00:29:34.238057 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-17 00:29:34.238477 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-17 00:29:34.238946 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-17 00:29:34.239501 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-17 00:29:34.239982 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-17 00:29:34.241132 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-17 00:29:34.241208 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-17 00:29:34.241711 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-17 00:29:34.242373 | 
orchestrator | 2025-05-17 00:29:34.242796 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-17 00:29:34.243576 | orchestrator | Saturday 17 May 2025 00:29:34 +0000 (0:00:01.311) 0:00:09.889 ********** 2025-05-17 00:29:34.801324 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:29:34.801553 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:29:34.802109 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:29:34.802658 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:29:34.803174 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:29:34.804536 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:29:34.807269 | orchestrator | 2025-05-17 00:29:34.807301 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-17 00:29:34.807315 | orchestrator | Saturday 17 May 2025 00:29:34 +0000 (0:00:00.570) 0:00:10.460 ********** 2025-05-17 00:29:34.862688 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:29:34.886381 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:29:34.909835 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:29:34.958401 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:29:34.958508 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:29:34.958572 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:29:34.959133 | orchestrator | 2025-05-17 00:29:34.959214 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-17 00:29:34.959810 | orchestrator | Saturday 17 May 2025 00:29:34 +0000 (0:00:00.157) 0:00:10.618 ********** 2025-05-17 00:29:35.661052 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-17 00:29:35.661156 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:29:35.661238 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-17 00:29:35.661987 | orchestrator | changed: [testbed-node-1] => (item=None) 
2025-05-17 00:29:35.662520 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:29:35.663251 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-17 00:29:35.664203 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:29:35.664880 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:29:35.665341 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-17 00:29:35.666120 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:29:35.667323 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-17 00:29:35.667420 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:29:35.668135 | orchestrator |
2025-05-17 00:29:35.668659 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-17 00:29:35.669329 | orchestrator | Saturday 17 May 2025 00:29:35 +0000 (0:00:00.701) 0:00:11.319 **********
2025-05-17 00:29:35.733175 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:29:35.766629 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:29:35.789025 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:29:35.828664 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:29:35.831445 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:29:35.831477 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:29:35.833764 | orchestrator |
2025-05-17 00:29:35.835216 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-17 00:29:35.838120 | orchestrator | Saturday 17 May 2025 00:29:35 +0000 (0:00:00.168) 0:00:11.488 **********
2025-05-17 00:29:35.871092 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:29:35.889877 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:29:35.909385 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:29:35.966388 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:29:35.967178 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:29:35.967690 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:29:35.971454 | orchestrator |
2025-05-17 00:29:35.971506 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-17 00:29:35.971525 | orchestrator | Saturday 17 May 2025 00:29:35 +0000 (0:00:00.138) 0:00:11.627 **********
2025-05-17 00:29:36.019380 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:29:36.039345 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:29:36.088198 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:29:36.127104 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:29:36.127205 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:29:36.127403 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:29:36.127627 | orchestrator |
2025-05-17 00:29:36.128075 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-17 00:29:36.129332 | orchestrator | Saturday 17 May 2025 00:29:36 +0000 (0:00:00.158) 0:00:11.786 **********
2025-05-17 00:29:36.820712 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:29:36.820931 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:29:36.821348 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:29:36.822272 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:29:36.822517 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:29:36.826330 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:29:36.826525 | orchestrator |
2025-05-17 00:29:36.827053 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-17 00:29:36.827527 | orchestrator | Saturday 17 May 2025 00:29:36 +0000 (0:00:00.687) 0:00:12.474 **********
2025-05-17 00:29:36.895360 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:29:36.939055 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:29:36.967989 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:29:37.058245 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:29:37.059658 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:29:37.060390 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:29:37.060810 | orchestrator |
2025-05-17 00:29:37.061843 | orchestrator | 2025-05-17 00:29:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:29:37.061935 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:29:37.062231 | orchestrator | 2025-05-17 00:29:37 | INFO  | Please wait and do not abort execution.
2025-05-17 00:29:37.064794 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:29:37.065179 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:29:37.066008 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:29:37.066451 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:29:37.068713 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:29:37.068992 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:29:37.069382 | orchestrator |
2025-05-17 00:29:37.069724 | orchestrator | Saturday 17 May 2025 00:29:37 +0000 (0:00:00.244) 0:00:12.718 **********
2025-05-17 00:29:37.070191 | orchestrator | ===============================================================================
2025-05-17 00:29:37.070511 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s
2025-05-17 00:29:37.070984 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.32s
2025-05-17 00:29:37.071486 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.31s
2025-05-17 00:29:37.071747 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s
2025-05-17 00:29:37.075152 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s
2025-05-17 00:29:37.075644 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2025-05-17 00:29:37.076316 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2025-05-17 00:29:37.077383 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s
2025-05-17 00:29:37.078326 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s
2025-05-17 00:29:37.078668 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s
2025-05-17 00:29:37.079318 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2025-05-17 00:29:37.080044 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2025-05-17 00:29:37.080215 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-05-17 00:29:37.080637 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2025-05-17 00:29:37.081548 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-05-17 00:29:37.081664 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2025-05-17 00:29:37.082083 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2025-05-17 00:29:37.471617 | orchestrator | + osism apply --environment custom facts
2025-05-17 00:29:38.808924 | orchestrator | 2025-05-17 00:29:38 | INFO  | Trying to run play facts in environment custom
2025-05-17 00:29:38.855758 | orchestrator | 2025-05-17 00:29:38 | INFO  | Task d1143df6-bf06-4e38-926a-81c3d43cd052 (facts) was prepared for execution.
2025-05-17 00:29:38.855891 | orchestrator | 2025-05-17 00:29:38 | INFO  | It takes a moment until task d1143df6-bf06-4e38-926a-81c3d43cd052 (facts) has been started and output is visible here.
2025-05-17 00:29:41.877964 | orchestrator |
2025-05-17 00:29:41.882060 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-17 00:29:41.882102 | orchestrator |
2025-05-17 00:29:41.882114 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-17 00:29:41.883071 | orchestrator | Saturday 17 May 2025 00:29:41 +0000 (0:00:00.099) 0:00:00.099 **********
2025-05-17 00:29:43.038613 | orchestrator | ok: [testbed-manager]
2025-05-17 00:29:44.136758 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:29:44.137750 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:29:44.138672 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:29:44.139076 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:29:44.139515 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:29:44.140177 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:29:44.140465 | orchestrator |
2025-05-17 00:29:44.140973 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-17 00:29:44.141388 | orchestrator | Saturday 17 May 2025 00:29:44 +0000 (0:00:02.258) 0:00:02.358 **********
2025-05-17 00:29:45.359768 | orchestrator | ok: [testbed-manager]
2025-05-17 00:29:46.210999 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:29:46.211182 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:29:46.211520 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:29:46.211845 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:29:46.212468 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:29:46.212589 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:29:46.213284 | orchestrator |
2025-05-17 00:29:46.213587 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-05-17 00:29:46.213890 | orchestrator |
2025-05-17 00:29:46.215904 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-17 00:29:46.217652 | orchestrator | Saturday 17 May 2025 00:29:46 +0000 (0:00:02.077) 0:00:04.436 **********
2025-05-17 00:29:46.325342 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:29:46.325456 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:29:46.325541 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:29:46.325683 | orchestrator |
2025-05-17 00:29:46.326129 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-17 00:29:46.326461 | orchestrator | Saturday 17 May 2025 00:29:46 +0000 (0:00:00.115) 0:00:04.551 **********
2025-05-17 00:29:46.463726 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:29:46.463882 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:29:46.463898 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:29:46.463910 | orchestrator |
2025-05-17 00:29:46.463922 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-17 00:29:46.463935 | orchestrator | Saturday 17 May 2025 00:29:46 +0000 (0:00:00.137) 0:00:04.689 **********
2025-05-17 00:29:46.588386 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:29:46.592364 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:29:46.592423 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:29:46.592437 | orchestrator |
2025-05-17 00:29:46.592450 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-17 00:29:46.592538 | orchestrator | Saturday 17 May 2025 00:29:46 +0000 (0:00:00.123) 0:00:04.813 **********
2025-05-17 00:29:46.749601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:29:46.749996 | orchestrator |
2025-05-17 00:29:46.750602 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-17 00:29:46.753703 | orchestrator | Saturday 17 May 2025 00:29:46 +0000 (0:00:00.162) 0:00:04.975 **********
2025-05-17 00:29:47.245464 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:29:47.247192 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:29:47.248600 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:29:47.249827 | orchestrator |
2025-05-17 00:29:47.250796 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-17 00:29:47.251702 | orchestrator | Saturday 17 May 2025 00:29:47 +0000 (0:00:00.492) 0:00:05.468 **********
2025-05-17 00:29:47.376345 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:29:47.377476 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:29:47.380901 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:29:47.381245 | orchestrator |
2025-05-17 00:29:47.381714 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-17 00:29:47.382503 | orchestrator | Saturday 17 May 2025 00:29:47 +0000 (0:00:00.131) 0:00:05.600 **********
2025-05-17 00:29:48.357960 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:29:48.359383 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:29:48.360099 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:29:48.360388 | orchestrator |
2025-05-17 00:29:48.360842 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-17 00:29:48.364029 | orchestrator | Saturday 17 May 2025 00:29:48 +0000 (0:00:00.984) 0:00:06.584 **********
2025-05-17 00:29:48.806420 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:29:48.806577 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:29:48.809847 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:29:48.809874 | orchestrator |
2025-05-17 00:29:48.809887 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-17 00:29:48.809900 | orchestrator | Saturday 17 May 2025 00:29:48 +0000 (0:00:00.446) 0:00:07.031 **********
2025-05-17 00:29:49.860833 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:29:49.861000 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:29:49.861889 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:29:49.863080 | orchestrator |
2025-05-17 00:29:49.864041 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-17 00:29:49.864690 | orchestrator | Saturday 17 May 2025 00:29:49 +0000 (0:00:01.053) 0:00:08.085 **********
2025-05-17 00:30:03.222939 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:30:03.225596 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:30:03.225629 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:30:03.225641 | orchestrator |
2025-05-17 00:30:03.225654 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-05-17 00:30:03.225668 | orchestrator | Saturday 17 May 2025 00:30:03 +0000 (0:00:13.354) 0:00:21.439 **********
2025-05-17 00:30:03.298768 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:30:03.349203 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:30:03.349261 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:30:03.349274 | orchestrator |
2025-05-17 00:30:03.349287 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-05-17 00:30:03.349300 | orchestrator | Saturday 17 May 2025 00:30:03 +0000 (0:00:00.128) 0:00:21.568 **********
2025-05-17 00:30:10.558304 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:30:10.558737 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:30:10.560532 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:30:10.561901 | orchestrator |
2025-05-17 00:30:10.563170 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-17 00:30:10.563883 | orchestrator | Saturday 17 May 2025 00:30:10 +0000 (0:00:07.213) 0:00:28.782 **********
2025-05-17 00:30:10.976423 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:10.977906 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:10.978575 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:10.979582 | orchestrator |
2025-05-17 00:30:10.980071 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-17 00:30:10.981014 | orchestrator | Saturday 17 May 2025 00:30:10 +0000 (0:00:00.416) 0:00:29.199 **********
2025-05-17 00:30:14.411948 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-05-17 00:30:14.412083 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-05-17 00:30:14.412400 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-05-17 00:30:14.414167 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-05-17 00:30:14.415191 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-05-17 00:30:14.416076 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-05-17 00:30:14.417026 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-05-17 00:30:14.417413 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-05-17 00:30:14.418661 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-05-17 00:30:14.419600 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-05-17 00:30:14.420399 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-05-17 00:30:14.421195 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-05-17 00:30:14.422064 | orchestrator |
2025-05-17 00:30:14.422958 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-17 00:30:14.423291 | orchestrator | Saturday 17 May 2025 00:30:14 +0000 (0:00:03.433) 0:00:32.632 **********
2025-05-17 00:30:15.573918 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:15.574886 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:15.576579 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:15.576940 | orchestrator |
2025-05-17 00:30:15.578415 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-17 00:30:15.578937 | orchestrator |
2025-05-17 00:30:15.579651 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-17 00:30:15.580338 | orchestrator | Saturday 17 May 2025 00:30:15 +0000 (0:00:01.165) 0:00:33.797 **********
2025-05-17 00:30:17.270261 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:20.556281 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:20.556393 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:20.556642 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:20.557288 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:20.557680 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:20.557978 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:20.559030 | orchestrator |
2025-05-17 00:30:20.559519 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:30:20.560014 | orchestrator | 2025-05-17 00:30:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:30:20.560156 | orchestrator | 2025-05-17 00:30:20 | INFO  | Please wait and do not abort execution.
2025-05-17 00:30:20.560837 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:30:20.561395 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:30:20.562211 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:30:20.563224 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:30:20.563547 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:30:20.564056 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:30:20.564540 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:30:20.564909 | orchestrator |
2025-05-17 00:30:20.565872 | orchestrator | Saturday 17 May 2025 00:30:20 +0000 (0:00:04.985) 0:00:38.782 **********
2025-05-17 00:30:20.566174 | orchestrator | ===============================================================================
2025-05-17 00:30:20.566340 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.35s
2025-05-17 00:30:20.566728 | orchestrator | Install required packages (Debian) -------------------------------------- 7.21s
2025-05-17 00:30:20.567265 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.99s
2025-05-17 00:30:20.567832 | orchestrator | Copy fact files --------------------------------------------------------- 3.43s
2025-05-17 00:30:20.568035 | orchestrator | Create custom facts directory ------------------------------------------- 2.26s
2025-05-17 00:30:20.568354 | orchestrator | Copy fact file ---------------------------------------------------------- 2.08s
2025-05-17 00:30:20.568866 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.17s
2025-05-17 00:30:20.570161 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2025-05-17 00:30:20.570395 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.98s
2025-05-17 00:30:20.571032 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.49s
2025-05-17 00:30:20.571420 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-05-17 00:30:20.571913 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-05-17 00:30:20.572349 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2025-05-17 00:30:20.572751 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.14s
2025-05-17 00:30:20.573477 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2025-05-17 00:30:20.573700 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s
2025-05-17 00:30:20.574129 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.12s
2025-05-17 00:30:20.574332 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-05-17 00:30:20.999332 | orchestrator | + osism apply bootstrap
2025-05-17 00:30:22.419869 | orchestrator | 2025-05-17 00:30:22 | INFO  | Task aa3b6b4f-0ff3-495c-827c-bd4921e9fc91 (bootstrap) was prepared for execution.
2025-05-17 00:30:22.419975 | orchestrator | 2025-05-17 00:30:22 | INFO  | It takes a moment until task aa3b6b4f-0ff3-495c-827c-bd4921e9fc91 (bootstrap) has been started and output is visible here.
2025-05-17 00:30:25.647933 | orchestrator |
2025-05-17 00:30:25.648133 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-17 00:30:25.648900 | orchestrator |
2025-05-17 00:30:25.649201 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-17 00:30:25.649966 | orchestrator | Saturday 17 May 2025 00:30:25 +0000 (0:00:00.104) 0:00:00.104 **********
2025-05-17 00:30:25.717094 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:25.741492 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:25.768249 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:25.793548 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:25.869433 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:25.870080 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:25.870963 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:25.871883 | orchestrator |
2025-05-17 00:30:25.872659 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-17 00:30:25.873288 | orchestrator |
2025-05-17 00:30:25.874297 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-17 00:30:25.874833 | orchestrator | Saturday 17 May 2025 00:30:25 +0000 (0:00:00.225) 0:00:00.329 **********
2025-05-17 00:30:29.541109 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:29.541220 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:29.541235 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:29.541709 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:29.541953 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:29.542213 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:29.543406 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:29.546401 | orchestrator |
2025-05-17 00:30:29.546432 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-17 00:30:29.546446 | orchestrator |
2025-05-17 00:30:29.546458 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-17 00:30:29.546470 | orchestrator | Saturday 17 May 2025 00:30:29 +0000 (0:00:03.670) 0:00:04.000 **********
2025-05-17 00:30:29.614900 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-17 00:30:29.647434 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-17 00:30:29.647510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-17 00:30:29.647523 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-17 00:30:29.648034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:30:29.686315 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-17 00:30:29.686378 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-17 00:30:29.686438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:30:29.686648 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-17 00:30:29.686931 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-17 00:30:29.687123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:30:29.732457 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-17 00:30:29.733010 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-17 00:30:29.733052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-17 00:30:29.733231 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-17 00:30:29.733434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-17 00:30:29.733752 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-17 00:30:29.736640 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-17 00:30:29.993838 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:30:29.994006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-17 00:30:29.994927 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-17 00:30:29.998206 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:30:29.998237 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-17 00:30:29.998249 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-17 00:30:29.998261 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-17 00:30:29.998715 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-17 00:30:29.999700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:30:30.000437 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-17 00:30:30.001341 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-17 00:30:30.001986 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-17 00:30:30.002850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:30:30.003826 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-17 00:30:30.004220 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-17 00:30:30.004949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:30:30.005571 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-17 00:30:30.006765 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-17 00:30:30.007318 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-17 00:30:30.007675 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:30:30.008525 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-17 00:30:30.009015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-17 00:30:30.009518 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-17 00:30:30.009891 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:30:30.010362 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-17 00:30:30.010759 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-17 00:30:30.011186 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-17 00:30:30.011608 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-17 00:30:30.012169 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-17 00:30:30.012534 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-17 00:30:30.013261 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-17 00:30:30.013518 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:30:30.013869 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:30:30.014391 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-17 00:30:30.014728 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-17 00:30:30.015173 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-17 00:30:30.015594 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-17 00:30:30.015958 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:30:30.016414 | orchestrator |
2025-05-17 00:30:30.017013 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-17 00:30:30.017231 | orchestrator |
2025-05-17 00:30:30.017622 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-05-17 00:30:30.018068 | orchestrator | Saturday 17 May 2025 00:30:29 +0000 (0:00:00.453) 0:00:04.453 **********
2025-05-17 00:30:30.090498 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:30.114475 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:30.138735 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:30.168253 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:30.211565 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:30.211812 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:30.213609 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:30.214384 | orchestrator |
2025-05-17 00:30:30.215315 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-05-17 00:30:30.216260 | orchestrator | Saturday 17 May 2025 00:30:30 +0000 (0:00:00.217) 0:00:04.671 **********
2025-05-17 00:30:31.425432 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:31.427137 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:31.428272 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:31.429637 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:31.430184 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:31.431370 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:31.432322 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:31.433153 | orchestrator |
2025-05-17 00:30:31.433726 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-05-17 00:30:31.434579 | orchestrator | Saturday 17 May 2025 00:30:31 +0000 (0:00:01.211) 0:00:05.883 **********
2025-05-17 00:30:32.602939 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:32.603038 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:32.603053 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:32.603995 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:32.605151 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:32.605646 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:32.606563 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:32.609145 | orchestrator |
2025-05-17 00:30:32.609588 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-05-17 00:30:32.610365 | orchestrator | Saturday 17 May 2025 00:30:32 +0000 (0:00:01.173) 0:00:07.056 **********
2025-05-17 00:30:32.878075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:30:32.878519 | orchestrator |
2025-05-17 00:30:32.879506 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-05-17 00:30:32.880999 | orchestrator | Saturday 17 May 2025 00:30:32 +0000 (0:00:00.280) 0:00:07.336 **********
2025-05-17 00:30:35.095401 | orchestrator | changed: [testbed-manager]
2025-05-17 00:30:35.096713 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:30:35.096743 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:30:35.096755 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:30:35.098811 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:30:35.099448 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:30:35.100350 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:30:35.101200 | orchestrator |
2025-05-17 00:30:35.101759 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-05-17 00:30:35.102367 | orchestrator | Saturday 17 May 2025 00:30:35 +0000 (0:00:02.214) 0:00:09.551 **********
2025-05-17 00:30:35.177259 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:30:35.336841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:30:35.337246 | orchestrator |
2025-05-17 00:30:35.337818 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-05-17 00:30:35.338724 | orchestrator | Saturday 17 May 2025 00:30:35 +0000 (0:00:00.244) 0:00:09.796 **********
2025-05-17 00:30:36.303915 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:30:36.304919 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:30:36.304951 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:30:36.305516 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:30:36.305538 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:30:36.306417 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:30:36.307243 | orchestrator |
2025-05-17 00:30:36.307960 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-05-17 00:30:36.308493 | orchestrator | Saturday 17 May 2025 00:30:36 +0000 (0:00:00.965) 0:00:10.761 **********
2025-05-17 00:30:36.371898 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:30:36.865278 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:30:36.865767 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:30:36.867459 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:30:36.867829 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:30:36.868626 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:30:36.869346 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:30:36.869741 | orchestrator |
2025-05-17 00:30:36.870219 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-05-17 00:30:36.870695 | orchestrator | Saturday 17 May 2025 00:30:36 +0000 (0:00:00.561) 0:00:11.323 **********
2025-05-17 00:30:36.956109 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:30:36.985925 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:30:37.006690 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:30:37.282830 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:30:37.283050 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:30:37.283381 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:30:37.283588 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:37.284677 | orchestrator |
2025-05-17 00:30:37.291989 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-17 00:30:37.292894 | orchestrator | Saturday 17 May 2025 00:30:37 +0000 (0:00:00.417) 0:00:11.741 **********
2025-05-17 00:30:37.353595 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:30:37.377399 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:30:37.400358 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:30:37.424392 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:30:37.493511 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:30:37.493565 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:30:37.494230 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:30:37.494935 | orchestrator |
2025-05-17 00:30:37.495823 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-17 00:30:37.497042 | orchestrator | Saturday 17 May 2025 00:30:37 +0000 (0:00:00.211) 0:00:11.953 **********
2025-05-17 00:30:37.762667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:30:37.762917 | orchestrator |
2025-05-17 00:30:37.763890 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-17 00:30:37.764502 | orchestrator | Saturday 17 May 2025 00:30:37 +0000 (0:00:00.268) 0:00:12.222 **********
2025-05-17 00:30:38.054519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:30:38.054669 | orchestrator |
2025-05-17 00:30:38.055062 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-17 00:30:38.056491 | orchestrator | Saturday 17 May 2025 00:30:38 +0000 (0:00:00.291) 0:00:12.513 **********
2025-05-17 00:30:39.294234 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:39.294381 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:39.294398 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:39.294478 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:39.294492 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:39.294726 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:39.299639 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:39.299734 | orchestrator |
2025-05-17 00:30:39.303560 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-17 00:30:39.304376 | orchestrator | Saturday 17 May 2025 00:30:39 +0000 (0:00:01.239) 0:00:13.753 **********
2025-05-17 00:30:39.367475 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:30:39.390412 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:30:39.414286 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:30:39.441036 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:30:39.492154 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:30:39.492744 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:30:39.493216 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:30:39.495871 | orchestrator |
2025-05-17 00:30:39.496393 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-17 00:30:39.496862 | orchestrator | Saturday 17 May 2025 00:30:39 +0000 (0:00:00.199) 0:00:13.952 **********
2025-05-17 00:30:39.986741 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:39.987021 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:39.988059 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:39.990727 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:39.991431 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:39.992280 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:39.993052 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:39.993463 | orchestrator |
2025-05-17 00:30:39.994129 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-17 00:30:39.994602 | orchestrator | Saturday 17 May 2025 00:30:39 +0000 (0:00:00.493) 0:00:14.446 **********
2025-05-17 00:30:40.063488 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:30:40.112612 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:30:40.148862 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:30:40.230766 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:30:40.231528 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:30:40.232636 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:30:40.238267 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:30:40.238389 | orchestrator |
2025-05-17 00:30:40.239119 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-17 00:30:40.239842 | orchestrator | Saturday 17 May 2025 00:30:40 +0000 (0:00:00.244) 0:00:14.690 **********
2025-05-17 00:30:40.745650 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:40.748351 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:30:40.749767 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:30:40.750192 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:30:40.751514 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:30:40.752244 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:30:40.753202 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:30:40.753853 | orchestrator |
2025-05-17 00:30:40.754575 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-17 00:30:40.755143 | orchestrator | Saturday 17 May 2025 00:30:40 +0000 (0:00:00.512) 0:00:15.202 **********
2025-05-17 00:30:41.811008 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:41.811865 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:30:41.811950 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:30:41.815313 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:30:41.815351 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:30:41.815362 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:30:41.815373 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:30:41.815384 | orchestrator |
2025-05-17 00:30:41.815935 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-17 00:30:41.816498 | orchestrator | Saturday 17 May 2025 00:30:41 +0000 (0:00:01.067) 0:00:16.269 **********
2025-05-17 00:30:42.945269 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:42.945744 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:42.946548 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:42.947885 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:42.948210 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:42.948592 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:42.949150 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:42.949749 | orchestrator |
2025-05-17 00:30:42.950669 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-17 00:30:42.950991 | orchestrator | Saturday 17 May 2025 00:30:42 +0000 (0:00:01.133) 0:00:17.403 **********
2025-05-17 00:30:43.217061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:30:43.217161 | orchestrator |
2025-05-17 00:30:43.217518 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-17 00:30:43.218110 | orchestrator | Saturday 17 May 2025 00:30:43 +0000 (0:00:00.273) 0:00:17.676 **********
2025-05-17 00:30:43.289204 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:30:44.638518 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:30:44.638625 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:30:44.638955 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:30:44.639956 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:30:44.640589 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:30:44.644247 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:30:44.644595 | orchestrator |
2025-05-17 00:30:44.645225 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-17 00:30:44.645973 | orchestrator | Saturday 17 May 2025 00:30:44 +0000 (0:00:01.420) 0:00:19.096 **********
2025-05-17 00:30:44.719160 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:44.744877 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:44.777003 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:44.796986 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:44.856442 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:44.856540 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:44.857111 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:44.857939 | orchestrator |
2025-05-17 00:30:44.859333 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-17 00:30:44.859359 | orchestrator | Saturday 17 May 2025 00:30:44 +0000 (0:00:00.219) 0:00:19.316 **********
2025-05-17 00:30:44.929706 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:44.955181 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:44.974911 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:45.000271 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:45.065311 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:45.065889 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:45.066471 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:45.067136 | orchestrator |
2025-05-17 00:30:45.073289 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-17 00:30:45.073335 | orchestrator | Saturday 17 May 2025 00:30:45 +0000 (0:00:00.209) 0:00:19.525 **********
2025-05-17 00:30:45.147990 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:45.177903 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:45.199487 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:45.225062 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:45.288998 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:45.289164 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:45.289505 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:45.291233 | orchestrator |
2025-05-17 00:30:45.294959 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-17 00:30:45.295329 | orchestrator | Saturday 17 May 2025 00:30:45 +0000 (0:00:00.223) 0:00:19.748 **********
2025-05-17 00:30:45.584203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:30:45.584302 | orchestrator |
2025-05-17 00:30:45.584318 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-17 00:30:45.584578 | orchestrator | Saturday 17 May 2025 00:30:45 +0000 (0:00:00.287) 0:00:20.036 **********
2025-05-17 00:30:46.098963 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:46.099068 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:46.099672 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:46.099813 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:46.101265 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:46.101691 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:46.102582 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:46.104043 | orchestrator |
2025-05-17 00:30:46.104087 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-17 00:30:46.104103 | orchestrator | Saturday 17 May 2025 00:30:46 +0000 (0:00:00.521) 0:00:20.557 **********
2025-05-17 00:30:46.194449 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:30:46.218495 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:30:46.246654 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:30:46.307914 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:30:46.308585 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:30:46.310014 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:30:46.311804 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:30:46.313691 | orchestrator |
2025-05-17 00:30:46.315041 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-17 00:30:46.315803 | orchestrator | Saturday 17 May 2025 00:30:46 +0000 (0:00:00.210) 0:00:20.767 **********
2025-05-17 00:30:47.318311 | orchestrator | changed: [testbed-manager]
2025-05-17 00:30:47.322183 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:47.322218 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:47.322230 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:30:47.322242 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:47.322292 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:30:47.323109 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:30:47.324088 | orchestrator |
2025-05-17 00:30:47.324984 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-17 00:30:47.325862 | orchestrator | Saturday 17 May 2025 00:30:47 +0000 (0:00:01.008) 0:00:21.776 **********
2025-05-17 00:30:47.827459 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:47.829999 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:47.830927 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:47.831754 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:30:47.832518 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:47.833148 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:30:47.834412 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:30:47.835228 | orchestrator |
2025-05-17 00:30:47.835893 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-17 00:30:47.836436 | orchestrator | Saturday 17 May 2025 00:30:47 +0000 (0:00:00.508) 0:00:22.285 **********
2025-05-17 00:30:48.913490 | orchestrator | ok: [testbed-manager]
2025-05-17 00:30:48.914558 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:30:48.915737 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:30:48.916803 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:30:48.918582 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:30:48.919703 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:30:48.921188 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:30:48.924272 | orchestrator |
2025-05-17 00:30:48.924685 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-17 00:30:48.926434 | orchestrator | Saturday 17 May 2025 00:30:48 +0000 (0:00:01.086) 0:00:23.371 **********
2025-05-17 00:31:01.660710 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:01.660875 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:01.660893 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:01.660975 | orchestrator | changed: [testbed-manager]
2025-05-17 00:31:01.662178 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:31:01.663359 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:31:01.663819 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:31:01.665481 | orchestrator |
2025-05-17 00:31:01.667341 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-05-17 00:31:01.667868 | orchestrator | Saturday 17 May 2025 00:31:01 +0000 (0:00:12.743) 0:00:36.115 **********
2025-05-17 00:31:01.727923 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:01.753587 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:01.772120 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:01.798199 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:01.858301 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:01.858739 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:01.859702 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:01.862237 | orchestrator |
2025-05-17 00:31:01.863159 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-05-17 00:31:01.866674 | orchestrator | Saturday 17 May 2025 00:31:01 +0000 (0:00:00.203) 0:00:36.319 **********
2025-05-17 00:31:01.932420 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:01.954719 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:01.982508 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:02.001573 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:02.060021 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:02.060160 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:02.060686 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:02.061153 | orchestrator |
2025-05-17 00:31:02.062488 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-05-17 00:31:02.062839 | orchestrator | Saturday 17 May 2025 00:31:02 +0000 (0:00:00.201) 0:00:36.520 **********
2025-05-17 00:31:02.132713 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:02.161009 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:02.185244 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:02.210391 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:02.276554 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:02.277868 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:02.280265 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:02.281088 | orchestrator |
2025-05-17 00:31:02.282159 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-05-17 00:31:02.282455 | orchestrator | Saturday 17 May 2025 00:31:02 +0000 (0:00:00.215) 0:00:36.736 **********
2025-05-17 00:31:02.542493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:31:02.543403 | orchestrator |
2025-05-17 00:31:02.544300 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-05-17 00:31:02.545312 | orchestrator | Saturday 17 May 2025 00:31:02 +0000 (0:00:00.266) 0:00:37.002 **********
2025-05-17 00:31:04.215328 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:04.216652 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:04.216685 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:04.216698 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:04.216710 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:04.216880 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:04.217386 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:04.219972 | orchestrator |
2025-05-17 00:31:04.221145 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-05-17 00:31:04.222402 | orchestrator | Saturday 17 May 2025 00:31:04 +0000 (0:00:01.671) 0:00:38.674 **********
2025-05-17 00:31:05.314885 | orchestrator | changed: [testbed-manager]
2025-05-17 00:31:05.314994 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:31:05.315008 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:31:05.315095 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:31:05.315271 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:31:05.316299 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:31:05.316569 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:31:05.317185 | orchestrator |
2025-05-17 00:31:05.318733 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-05-17 00:31:05.321837 | orchestrator | Saturday 17 May 2025 00:31:05 +0000 (0:00:01.094) 0:00:39.768 **********
2025-05-17 00:31:06.092626 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:06.095676 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:06.095711 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:06.097111 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:06.098105 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:06.099245 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:06.100199 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:06.101242 | orchestrator |
2025-05-17 00:31:06.101285 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-05-17 00:31:06.101430 | orchestrator | Saturday 17 May 2025 00:31:06 +0000 (0:00:00.781) 0:00:40.550 **********
2025-05-17 00:31:06.371464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:31:06.373753 | orchestrator |
2025-05-17 00:31:06.374368 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-05-17 00:31:06.375273 | orchestrator | Saturday 17 May 2025 00:31:06 +0000 (0:00:00.278) 0:00:40.829 **********
2025-05-17 00:31:07.370424 | orchestrator | changed: [testbed-manager]
2025-05-17 00:31:07.371584 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:31:07.372652 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:31:07.373219 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:31:07.374116 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:31:07.374932 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:31:07.375543 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:31:07.376056 | orchestrator |
2025-05-17 00:31:07.376727 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-05-17 00:31:07.377176 | orchestrator | Saturday 17 May 2025 00:31:07 +0000 (0:00:01.000) 0:00:41.829 **********
2025-05-17 00:31:07.467193 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:31:07.499960 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:31:07.520962 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:31:07.652930 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:31:07.653107 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:31:07.653711 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:31:07.654477 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:31:07.655426 | orchestrator |
2025-05-17 00:31:07.658988 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-05-17 00:31:07.659548 | orchestrator | Saturday 17 May 2025 00:31:07 +0000 (0:00:00.283) 0:00:42.112 **********
2025-05-17 00:31:19.065232 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:31:19.065356 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:31:19.065372 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:31:19.065384 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:31:19.065395 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:31:19.065406 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:31:19.065417 | orchestrator | changed: [testbed-manager]
2025-05-17 00:31:19.065498 | orchestrator |
2025-05-17 00:31:19.066002 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-05-17 00:31:19.066627 | orchestrator | Saturday 17 May 2025 00:31:19 +0000 (0:00:11.403) 0:00:53.516 **********
2025-05-17 00:31:20.658760 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:20.658950 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:20.659034 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:20.659576 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:20.662885 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:20.663247 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:20.663757 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:20.664524 | orchestrator |
2025-05-17 00:31:20.664986 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-05-17 00:31:20.665689 | orchestrator | Saturday 17 May 2025 00:31:20 +0000 (0:00:01.601) 0:00:55.117 **********
2025-05-17 00:31:21.543558 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:21.543863 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:21.548134 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:21.548856 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:21.550118 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:21.552220 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:21.552256 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:21.552268 | orchestrator |
2025-05-17 00:31:21.553085 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-05-17 00:31:21.553657 | orchestrator | Saturday 17 May 2025 00:31:21 +0000 (0:00:00.884) 0:00:56.001 **********
2025-05-17 00:31:21.631142 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:21.658833 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:21.685412 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:21.708271 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:21.762832 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:21.763329 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:21.765101 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:21.765131 | orchestrator |
2025-05-17 00:31:21.765946 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-05-17 00:31:21.766702 | orchestrator | Saturday 17 May 2025 00:31:21 +0000 (0:00:00.220) 0:00:56.222 **********
2025-05-17 00:31:21.834705 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:21.862993 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:21.885577 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:21.913288 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:21.975481 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:21.976482 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:21.977116 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:21.977664 | orchestrator |
2025-05-17 00:31:21.978825 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-05-17 00:31:21.979376 | orchestrator | Saturday 17 May 2025 00:31:21 +0000 (0:00:00.212) 0:00:56.435 **********
2025-05-17 00:31:22.255858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:31:22.256556 | orchestrator |
2025-05-17 00:31:22.258322 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-17 00:31:22.259153 | orchestrator | Saturday 17 May 2025 00:31:22 +0000 (0:00:00.278) 0:00:56.713 **********
2025-05-17 00:31:23.794414 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:23.794894 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:23.796123 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:23.797245 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:23.797590 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:23.798353 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:23.799800 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:23.800149 | orchestrator |
2025-05-17 00:31:23.800807 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-05-17 00:31:23.801233 | orchestrator | Saturday 17 May 2025 00:31:23 +0000 (0:00:01.538) 0:00:58.252 **********
2025-05-17 00:31:24.353119 | orchestrator | changed: [testbed-manager]
2025-05-17 00:31:24.353415 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:31:24.356648 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:31:24.356698 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:31:24.356713 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:31:24.357940 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:31:24.358140 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:31:24.359675 | orchestrator |
2025-05-17 00:31:24.360939 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-05-17 00:31:24.362173 | orchestrator | Saturday 17 May 2025 00:31:24 +0000 (0:00:00.558) 0:00:58.811 **********
2025-05-17 00:31:24.424697 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:24.448509 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:24.472733 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:24.495610 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:24.548522 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:24.548917 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:24.549420 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:24.550244 | orchestrator |
2025-05-17 00:31:24.550652 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-05-17 00:31:24.552002 | orchestrator | Saturday 17 May 2025 00:31:24 +0000 (0:00:00.197) 0:00:59.008 **********
2025-05-17 00:31:25.635017 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:25.635127 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:25.635868 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:25.637225 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:25.638071 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:25.638669 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:25.639291 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:25.640148 | orchestrator |
2025-05-17 00:31:25.640837 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-05-17 00:31:25.641878 | orchestrator | Saturday 17 May 2025 00:31:25 +0000 (0:00:01.083) 0:01:00.092 **********
2025-05-17 00:31:27.203418 | orchestrator | changed: [testbed-manager]
2025-05-17 00:31:27.203595 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:31:27.204666 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:31:27.205836 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:31:27.206382 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:31:27.208020 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:31:27.208856 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:31:27.209375 | orchestrator |
2025-05-17 00:31:27.210157 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-05-17 00:31:27.210654 | orchestrator | Saturday 17 May 2025 00:31:27 +0000 (0:00:01.569) 0:01:01.661 **********
2025-05-17 00:31:29.338207 | orchestrator | ok: [testbed-manager]
2025-05-17 00:31:29.339183 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:31:29.340297 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:31:29.341136 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:31:29.341681 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:31:29.342953 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:31:29.344165 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:31:29.344187 | orchestrator |
2025-05-17 00:31:29.344952 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-05-17 00:31:29.345625 | orchestrator | Saturday 17 May 2025 00:31:29 +0000 (0:00:02.134) 0:01:03.796 **********
2025-05-17 00:32:08.998976 | orchestrator | ok: [testbed-manager]
2025-05-17 00:32:08.999098 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:32:08.999114 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:32:08.999126 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:32:08.999137 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:32:08.999148 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:32:08.999329 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:32:08.999348 | orchestrator |
2025-05-17 00:32:08.999361 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-05-17 00:32:08.999374 | orchestrator | Saturday 17 May 2025 00:32:08 +0000 (0:00:39.651) 0:01:43.447 **********
2025-05-17 00:33:30.390405 | orchestrator | changed: [testbed-manager]
2025-05-17 00:33:30.390527 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:33:30.390543 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:33:30.390554 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:33:30.390720 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:33:30.390741 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:33:30.391350 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:33:30.391963 | orchestrator |
2025-05-17 00:33:30.392475 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-05-17 00:33:30.392908 | orchestrator | Saturday 17 May 2025 00:33:30 +0000 (0:01:21.396) 0:03:04.843 **********
2025-05-17 00:33:32.053991 | orchestrator | ok: [testbed-manager]
2025-05-17 00:33:32.055736 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:33:32.056300 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:33:32.056767 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:33:32.057950 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:33:32.059134 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:33:32.059520 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:33:32.060110 | orchestrator |
2025-05-17 00:33:32.060763 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-05-17 00:33:32.061164 | orchestrator | Saturday 17 May 2025 00:33:32 +0000 (0:00:01.666) 0:03:06.510 **********
2025-05-17 00:33:43.459277 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:33:43.459399 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:33:43.459415 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:33:43.460585 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:33:43.461948 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:33:43.462879 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:33:43.463306 | orchestrator | changed: [testbed-manager]
2025-05-17 00:33:43.463812 | orchestrator |
2025-05-17 00:33:43.464364 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-17 00:33:43.464806 | orchestrator | Saturday 17 May 2025 00:33:43 +0000 (0:00:11.403) 0:03:17.913 **********
2025-05-17 00:33:43.838386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1,
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-17 00:33:43.838929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-17 00:33:43.839536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-17 00:33:43.839886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-17 00:33:43.843470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 
1024}]}) 2025-05-17 00:33:43.844271 | orchestrator | 2025-05-17 00:33:43.844915 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-17 00:33:43.845444 | orchestrator | Saturday 17 May 2025 00:33:43 +0000 (0:00:00.383) 0:03:18.297 ********** 2025-05-17 00:33:43.895557 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-17 00:33:43.934160 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:33:43.934623 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-17 00:33:43.935317 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-17 00:33:43.963981 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:33:43.964013 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-17 00:33:43.990145 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:33:44.015111 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:33:44.557854 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-17 00:33:44.557971 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-17 00:33:44.558236 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-17 00:33:44.558665 | orchestrator | 2025-05-17 00:33:44.559382 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-17 00:33:44.560207 | orchestrator | Saturday 17 May 2025 00:33:44 +0000 (0:00:00.716) 0:03:19.014 ********** 2025-05-17 00:33:44.608732 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-17 00:33:44.608860 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-17 00:33:44.655232 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-17 00:33:44.655359 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-17 00:33:44.655437 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-17 00:33:44.655453 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-17 00:33:44.655850 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-17 00:33:44.656286 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-17 00:33:44.656343 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-17 00:33:44.656566 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-17 00:33:44.660404 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-17 00:33:44.660461 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-17 00:33:44.660477 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-17 00:33:44.660489 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-17 00:33:44.660500 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-17 00:33:44.660511 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-17 00:33:44.660521 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-17 00:33:44.660532 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-17 00:33:44.660543 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-17 00:33:44.660631 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-17 00:33:44.660647 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-17 00:33:44.660904 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-17 00:33:44.689820 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:33:44.690257 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-17 00:33:44.690957 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-17 00:33:44.690981 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-17 00:33:44.691797 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-17 00:33:44.691832 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-17 00:33:44.691844 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-17 00:33:44.692327 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-17 00:33:44.692509 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-17 00:33:44.693220 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-17 00:33:44.693259 | 
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-17 00:33:44.693341 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-17 00:33:44.725399 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:33:44.725481 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-17 00:33:44.726362 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-17 00:33:44.726385 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-17 00:33:44.726398 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-17 00:33:44.763811 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-17 00:33:44.763951 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:33:44.769019 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-17 00:33:44.769048 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-17 00:33:49.083898 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:33:49.085583 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-17 00:33:49.086987 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-17 00:33:49.090248 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-17 00:33:49.090302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-17 00:33:49.090317 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-17 00:33:49.090330 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-17 00:33:49.090806 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-17 00:33:49.091638 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-17 00:33:49.092432 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-17 00:33:49.092977 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-17 00:33:49.093675 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-17 00:33:49.094296 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-17 00:33:49.095045 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-17 00:33:49.095552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-17 00:33:49.095647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-17 00:33:49.095990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-17 00:33:49.096305 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-17 00:33:49.096576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-17 00:33:49.096973 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-17 00:33:49.097349 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 
2025-05-17 00:33:49.097612 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-17 00:33:49.098116 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-17 00:33:49.098359 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-17 00:33:49.098863 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-17 00:33:49.099286 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-17 00:33:49.099619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-17 00:33:49.099947 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-17 00:33:49.100247 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-17 00:33:49.100547 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-17 00:33:49.100942 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-17 00:33:49.101177 | orchestrator | 2025-05-17 00:33:49.101410 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-17 00:33:49.101693 | orchestrator | Saturday 17 May 2025 00:33:49 +0000 (0:00:04.526) 0:03:23.540 ********** 2025-05-17 00:33:50.583968 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-17 00:33:50.585102 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-17 00:33:50.585897 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-17 00:33:50.588709 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-17 00:33:50.589720 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-17 00:33:50.590811 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-17 00:33:50.591282 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-17 00:33:50.592690 | orchestrator | 2025-05-17 00:33:50.592857 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-17 00:33:50.593358 | orchestrator | Saturday 17 May 2025 00:33:50 +0000 (0:00:01.501) 0:03:25.041 ********** 2025-05-17 00:33:50.615005 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-17 00:33:50.643138 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:33:50.740996 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-17 00:33:50.741171 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-17 00:33:51.076618 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:33:51.076924 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:33:51.078120 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-17 00:33:51.078322 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:33:51.079196 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-17 00:33:51.079631 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-17 00:33:51.080232 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-17 
00:33:51.080958 | orchestrator | 2025-05-17 00:33:51.081232 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-17 00:33:51.081669 | orchestrator | Saturday 17 May 2025 00:33:51 +0000 (0:00:00.494) 0:03:25.536 ********** 2025-05-17 00:33:51.151369 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-17 00:33:51.176165 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:33:51.254674 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-17 00:33:51.709399 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:33:51.709496 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-17 00:33:51.710219 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:33:51.710903 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-17 00:33:51.710967 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:33:51.711923 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-17 00:33:51.713022 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-17 00:33:51.714490 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-17 00:33:51.715408 | orchestrator | 2025-05-17 00:33:51.716170 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-17 00:33:51.717178 | orchestrator | Saturday 17 May 2025 00:33:51 +0000 (0:00:00.632) 0:03:26.168 ********** 2025-05-17 00:33:51.789493 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:33:51.810408 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:33:51.832807 | orchestrator 
| skipping: [testbed-node-4] 2025-05-17 00:33:51.854483 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:33:51.977331 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:33:51.977493 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:33:51.978361 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:33:51.978940 | orchestrator | 2025-05-17 00:33:51.982231 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-17 00:33:51.982257 | orchestrator | Saturday 17 May 2025 00:33:51 +0000 (0:00:00.268) 0:03:26.437 ********** 2025-05-17 00:33:57.743005 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:33:57.743465 | orchestrator | ok: [testbed-manager] 2025-05-17 00:33:57.744111 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:33:57.745725 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:33:57.745845 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:33:57.747117 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:33:57.747142 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:33:57.747957 | orchestrator | 2025-05-17 00:33:57.749460 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-17 00:33:57.749884 | orchestrator | Saturday 17 May 2025 00:33:57 +0000 (0:00:05.765) 0:03:32.202 ********** 2025-05-17 00:33:57.816254 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-17 00:33:57.817831 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-17 00:33:57.848691 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:33:57.886827 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:33:57.890356 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-17 00:33:57.932950 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:33:57.933171 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-17 00:33:57.933880 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  
2025-05-17 00:33:57.979869 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:33:57.981110 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-17 00:33:58.048948 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:33:58.049545 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:33:58.050307 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-17 00:33:58.051323 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:33:58.054207 | orchestrator | 2025-05-17 00:33:58.054656 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-17 00:33:58.055776 | orchestrator | Saturday 17 May 2025 00:33:58 +0000 (0:00:00.306) 0:03:32.508 ********** 2025-05-17 00:33:59.041338 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-17 00:33:59.042078 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-17 00:33:59.043038 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-17 00:33:59.043418 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-17 00:33:59.044297 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-17 00:33:59.045361 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-17 00:33:59.045890 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-17 00:33:59.046103 | orchestrator | 2025-05-17 00:33:59.046676 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-17 00:33:59.047360 | orchestrator | Saturday 17 May 2025 00:33:59 +0000 (0:00:00.990) 0:03:33.499 ********** 2025-05-17 00:33:59.437013 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:33:59.441363 | orchestrator | 2025-05-17 00:33:59.441398 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] ************************* 2025-05-17 00:33:59.441412 | orchestrator | Saturday 17 May 2025 00:33:59 +0000 (0:00:00.395) 0:03:33.894 ********** 2025-05-17 00:34:00.706480 | orchestrator | ok: [testbed-manager] 2025-05-17 00:34:00.706695 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:34:00.706717 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:34:00.708639 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:34:00.708683 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:34:00.708695 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:34:00.708977 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:34:00.709488 | orchestrator | 2025-05-17 00:34:00.710105 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-17 00:34:00.710481 | orchestrator | Saturday 17 May 2025 00:34:00 +0000 (0:00:01.268) 0:03:35.163 ********** 2025-05-17 00:34:01.264880 | orchestrator | ok: [testbed-manager] 2025-05-17 00:34:01.265356 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:34:01.267553 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:34:01.267722 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:34:01.271506 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:34:01.273340 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:34:01.273991 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:34:01.274384 | orchestrator | 2025-05-17 00:34:01.274816 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-17 00:34:01.275427 | orchestrator | Saturday 17 May 2025 00:34:01 +0000 (0:00:00.561) 0:03:35.724 ********** 2025-05-17 00:34:01.896451 | orchestrator | changed: [testbed-manager] 2025-05-17 00:34:01.896551 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:34:01.897281 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:34:01.899081 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:34:01.899635 | orchestrator | changed: [testbed-node-0] 
2025-05-17 00:34:01.900099 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:34:01.900732 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:34:01.901109 | orchestrator | 2025-05-17 00:34:01.901801 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-17 00:34:01.902401 | orchestrator | Saturday 17 May 2025 00:34:01 +0000 (0:00:00.624) 0:03:36.349 ********** 2025-05-17 00:34:02.426418 | orchestrator | ok: [testbed-manager] 2025-05-17 00:34:02.426582 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:34:02.426922 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:34:02.428312 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:34:02.428699 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:34:02.428720 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:34:02.430269 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:34:02.431851 | orchestrator | 2025-05-17 00:34:02.432166 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-17 00:34:02.432189 | orchestrator | Saturday 17 May 2025 00:34:02 +0000 (0:00:00.536) 0:03:36.886 ********** 2025-05-17 00:34:03.338322 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747440233.016861, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 00:34:03.338974 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747440267.8516252, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.340021 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747440265.8495884, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.341266 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747440256.8613644, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.342146 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747440260.016872, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.342982 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747440269.2610016, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.344290 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747440267.3576095, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.347507 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747440262.7749221, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.348976 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747440183.3089476, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.349842 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747440179.0316262, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.350446 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747440189.570843, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.351486 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747440185.4817078, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.352376 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747440187.9885826, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.352762 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747440188.6394188, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 00:34:03.353319 | orchestrator |
2025-05-17 00:34:03.353709 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-05-17 00:34:03.354248 | orchestrator | Saturday 17 May 2025 00:34:03 +0000 (0:00:00.911) 0:03:37.798 **********
2025-05-17 00:34:04.460192 | orchestrator | changed: [testbed-manager]
2025-05-17 00:34:04.461298 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:34:04.461488 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:34:04.466529 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:34:04.467268 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:34:04.469590 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:34:04.472194 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:34:04.472220 | orchestrator |
2025-05-17 00:34:04.472234 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-05-17 00:34:04.472248 | orchestrator | Saturday 17 May 2025 00:34:04 +0000 (0:00:01.120) 0:03:38.918 **********
2025-05-17 00:34:05.512652 | orchestrator | changed: [testbed-manager]
2025-05-17 00:34:05.516160 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:34:05.516228 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:34:05.516980 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:34:05.517125 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:34:05.520111 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:34:05.523508 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:34:05.523927 | orchestrator |
2025-05-17 00:34:05.524786 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-05-17 00:34:05.525699 | orchestrator | Saturday 17 May 2025 00:34:05 +0000 (0:00:01.052) 0:03:39.971 **********
2025-05-17 00:34:05.575082 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:34:05.604516 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:34:05.698854 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:34:05.730161 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:34:05.794595 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:34:05.795315 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:34:05.796201 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:34:05.796537 | orchestrator |
2025-05-17 00:34:05.797640 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-05-17 00:34:05.798354 | orchestrator | Saturday 17 May 2025 00:34:05 +0000 (0:00:00.282) 0:03:40.254 **********
2025-05-17 00:34:06.510723 | orchestrator | ok: [testbed-manager]
2025-05-17 00:34:06.512421 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:34:06.514303 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:34:06.514329 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:34:06.514341 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:34:06.514353 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:34:06.515046 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:34:06.516181 | orchestrator |
2025-05-17 00:34:06.517177 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-05-17 00:34:06.518126 | orchestrator | Saturday 17 May 2025 00:34:06 +0000 (0:00:00.713) 0:03:40.968 **********
2025-05-17 00:34:06.923944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:34:06.924119 | orchestrator |
2025-05-17 00:34:06.925147 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-05-17 00:34:06.928814 | orchestrator | Saturday 17 May 2025 00:34:06 +0000 (0:00:00.415) 0:03:41.383 **********
2025-05-17 00:34:14.447848 | orchestrator | ok: [testbed-manager]
2025-05-17 00:34:14.448190 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:34:14.449572 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:34:14.450161 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:34:14.450894 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:34:14.451769 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:34:14.452654 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:34:14.453231 | orchestrator |
2025-05-17 00:34:14.454308 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-05-17 00:34:14.455460 | orchestrator | Saturday 17 May 2025 00:34:14 +0000 (0:00:07.521) 0:03:48.905 **********
2025-05-17 00:34:15.564899 | orchestrator | ok: [testbed-manager]
2025-05-17 00:34:15.566506 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:34:15.566597 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:34:15.567669 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:34:15.568811 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:34:15.569958 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:34:15.571163 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:34:15.571948 | orchestrator |
2025-05-17 00:34:15.572976 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-05-17 00:34:15.573675 | orchestrator | Saturday 17 May 2025 00:34:15 +0000 (0:00:01.118) 0:03:50.024 **********
2025-05-17 00:34:16.566853 | orchestrator | ok: [testbed-manager]
2025-05-17 00:34:16.566965 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:34:16.570247 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:34:16.570302 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:34:16.570315 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:34:16.570327 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:34:16.570339 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:34:16.570950 | orchestrator |
2025-05-17 00:34:16.571324 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-05-17 00:34:16.572034 | orchestrator | Saturday 17 May 2025 00:34:16 +0000 (0:00:01.000) 0:03:51.024 **********
2025-05-17 00:34:16.925533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:34:16.925635 | orchestrator |
2025-05-17 00:34:16.926445 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-05-17 00:34:16.926996 | orchestrator | Saturday 17 May 2025 00:34:16 +0000 (0:00:00.360) 0:03:51.385 **********
2025-05-17 00:34:24.914889 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:34:24.916338 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:34:24.916756 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:34:24.919897 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:34:24.921457 | orchestrator | changed: [testbed-manager]
2025-05-17 00:34:24.922337 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:34:24.923337 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:34:24.924223 | orchestrator |
2025-05-17 00:34:24.924907 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-05-17 00:34:24.925809 | orchestrator | Saturday 17 May 2025 00:34:24 +0000 (0:00:07.987) 0:03:59.372 **********
2025-05-17 00:34:25.639314 | orchestrator | changed: [testbed-manager]
2025-05-17 00:34:25.639963 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:34:25.641174 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:34:25.642145 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:34:25.643070 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:34:25.643860 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:34:25.644523 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:34:25.645413 | orchestrator |
2025-05-17 00:34:25.645997 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-05-17 00:34:25.647138 | orchestrator | Saturday 17 May 2025 00:34:25 +0000 (0:00:00.725) 0:04:00.098 **********
2025-05-17 00:34:26.733710 | orchestrator | changed: [testbed-manager]
2025-05-17 00:34:26.734698 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:34:26.736764 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:34:26.736799 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:34:26.737432 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:34:26.738364 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:34:26.739261 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:34:26.739881 | orchestrator |
2025-05-17 00:34:26.740586 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-05-17 00:34:26.741590 | orchestrator | Saturday 17 May 2025 00:34:26 +0000 (0:00:01.092) 0:04:01.190 **********
2025-05-17 00:34:27.752409 | orchestrator | changed: [testbed-manager]
2025-05-17 00:34:27.752572 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:34:27.753440 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:34:27.754174 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:34:27.757172 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:34:27.757197 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:34:27.757208 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:34:27.757220 | orchestrator |
2025-05-17 00:34:27.757233 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-05-17 00:34:27.757245 | orchestrator | Saturday 17 May 2025 00:34:27 +0000 (0:00:01.021) 0:04:02.212 **********
2025-05-17 00:34:27.854694 | orchestrator | ok: [testbed-manager]
2025-05-17 00:34:27.894607 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:34:27.924633 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:34:27.967555 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:34:28.027793 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:34:28.028484 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:34:28.029059 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:34:28.029972 | orchestrator |
2025-05-17 00:34:28.031587 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-05-17 00:34:28.032306 | orchestrator | Saturday 17 May 2025 00:34:28 +0000 (0:00:00.275) 0:04:02.487 **********
2025-05-17 00:34:28.138365 | orchestrator | ok: [testbed-manager]
2025-05-17 00:34:28.177243 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:34:28.208577 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:34:28.245497 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:34:28.338079 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:34:28.338718 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:34:28.339519 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:34:28.340356 | orchestrator |
2025-05-17 00:34:28.341188 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-05-17 00:34:28.342181 | orchestrator | Saturday 17 May 2025 00:34:28 +0000 (0:00:00.290) 0:04:02.797 **********
2025-05-17 00:34:28.445429 | orchestrator | ok: [testbed-manager]
2025-05-17 00:34:28.475190 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:34:28.508201 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:34:28.543007 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:34:28.628942 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:34:28.629405 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:34:28.630317 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:34:28.631814 | orchestrator |
2025-05-17 00:34:28.633378 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-05-17 00:34:28.634208 | orchestrator | Saturday 17 May 2025 00:34:28 +0000 (0:00:00.290) 0:04:03.087 **********
2025-05-17 00:34:34.471988 | orchestrator | ok: [testbed-manager]
2025-05-17 00:34:34.472139 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:34:34.472226 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:34:34.472450 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:34:34.473929 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:34:34.473970 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:34:34.474600 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:34:34.475302 | orchestrator |
2025-05-17 00:34:34.475661 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-05-17 00:34:34.476253 | orchestrator | Saturday 17 May 2025 00:34:34 +0000 (0:00:05.841) 0:04:08.929 **********
2025-05-17 00:34:34.849070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:34:34.849756 | orchestrator |
2025-05-17 00:34:34.850612 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-05-17 00:34:34.851252 | orchestrator | Saturday 17 May 2025 00:34:34 +0000 (0:00:00.377) 0:04:09.306 **********
2025-05-17 00:34:34.927603 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-05-17 00:34:34.927676 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-05-17 00:34:34.928968 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-05-17 00:34:34.966114 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-05-17 00:34:34.967209 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:34:35.025288 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-05-17 00:34:35.026212 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:34:35.026295 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-05-17 00:34:35.027438 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-05-17 00:34:35.028084 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-05-17 00:34:35.056270 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:34:35.102975 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-05-17 00:34:35.104156 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:34:35.105561 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-05-17 00:34:35.109283 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-05-17 00:34:35.109307 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-05-17 00:34:35.172633 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:34:35.174181 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:34:35.174685 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-05-17 00:34:35.175889 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-05-17 00:34:35.175911 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:34:35.176227 | orchestrator |
2025-05-17 00:34:35.176862 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-05-17 00:34:35.177485 | orchestrator | Saturday 17 May 2025 00:34:35 +0000 (0:00:00.324) 0:04:09.631 **********
2025-05-17 00:34:35.578682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:34:35.579614 | orchestrator |
2025-05-17 00:34:35.581308 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-05-17 00:34:35.582190 | orchestrator | Saturday 17 May 2025 00:34:35 +0000 (0:00:00.407) 0:04:10.038 **********
2025-05-17 00:34:35.675636 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-05-17 00:34:35.676505 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-05-17 00:34:35.712545 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:34:35.714656 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-05-17 00:34:35.749800 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:34:35.749830 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-05-17 00:34:35.784682 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:34:35.836095 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:34:35.836822 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-05-17 00:34:35.903177 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-05-17 00:34:35.903894 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:34:35.907267 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:34:35.907334 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-05-17 00:34:35.908308 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:34:35.909797 | orchestrator |
2025-05-17 00:34:35.910364 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-05-17 00:34:35.911063 | orchestrator | Saturday 17 May 2025 00:34:35 +0000 (0:00:00.324) 0:04:10.362 **********
2025-05-17 00:34:36.303223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:34:36.305560 | orchestrator |
2025-05-17 00:34:36.305591 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-05-17 00:34:36.305605 | orchestrator | Saturday 17 May 2025 00:34:36 +0000 (0:00:00.399) 0:04:10.761 **********
2025-05-17 00:35:10.008128 | orchestrator | changed: [testbed-manager]
2025-05-17 00:35:10.008253 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:35:10.008269 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:35:10.008281 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:35:10.008644 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:35:10.009420 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:35:10.010225 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:35:10.010801 | orchestrator |
2025-05-17 00:35:10.011557 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-05-17 00:35:10.012177 | orchestrator | Saturday 17 May 2025 00:35:09 +0000 (0:00:33.700) 0:04:44.461 **********
2025-05-17 00:35:17.933017 | orchestrator | changed: [testbed-manager]
2025-05-17 00:35:17.933297 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:35:17.934161 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:35:17.936114 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:35:17.936807 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:35:17.937989 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:35:17.938826 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:35:17.939330 | orchestrator |
2025-05-17 00:35:17.941328 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-05-17 00:35:17.942093 | orchestrator | Saturday 17 May 2025 00:35:17 +0000 (0:00:07.928) 0:04:52.390 **********
2025-05-17 00:35:25.241507 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:35:25.241698 | orchestrator | changed: [testbed-manager]
2025-05-17 00:35:25.242155 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:35:25.243058 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:35:25.244158 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:35:25.246832 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:35:25.246856 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:35:25.247367 | orchestrator |
2025-05-17 00:35:25.248673 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-05-17 00:35:25.248798 | orchestrator | Saturday 17 May 2025 00:35:25 +0000 (0:00:07.307) 0:04:59.698 **********
2025-05-17 00:35:26.864243 | orchestrator | ok: [testbed-manager]
2025-05-17 00:35:26.864347 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:35:26.864362 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:35:26.864441 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:35:26.864457 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:35:26.864838 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:35:26.866140 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:35:26.866270 | orchestrator |
2025-05-17 00:35:26.866288 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-05-17 00:35:26.866607 | orchestrator | Saturday 17 May 2025 00:35:26 +0000 (0:00:01.625) 0:05:01.323 **********
2025-05-17 00:35:32.140838 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:35:32.141338 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:35:32.141819 | orchestrator | changed: [testbed-manager]
2025-05-17 00:35:32.142541 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:35:32.143428 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:35:32.144152 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:35:32.145355 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:35:32.147614 | orchestrator |
2025-05-17 00:35:32.148321 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-05-17 00:35:32.149241 | orchestrator | Saturday 17 May 2025 00:35:32 +0000 (0:00:00.396) 0:05:06.599 **********
2025-05-17 00:35:32.538285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:35:32.538386 | orchestrator |
2025-05-17 00:35:32.538835 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-05-17 00:35:32.540334 | orchestrator | Saturday 17 May 2025 00:35:32 +0000 (0:00:00.396) 0:05:06.995 **********
2025-05-17 00:35:33.313927 | orchestrator | changed: [testbed-manager]
2025-05-17 00:35:33.316118 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:35:33.316517 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:35:33.317236 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:35:33.318227 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:35:33.319571 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:35:33.320364 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:35:33.321258 | orchestrator |
2025-05-17 00:35:33.322506 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-05-17 00:35:33.323098 | orchestrator | Saturday 17 May 2025 00:35:33 +0000 (0:00:00.775) 0:05:07.771 **********
2025-05-17 00:35:34.893100 | orchestrator | ok: [testbed-manager]
2025-05-17 00:35:34.893314 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:35:34.893909 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:35:34.895891 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:35:34.897102 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:35:34.897631 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:35:34.898923 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:35:34.899878 | orchestrator |
2025-05-17 00:35:34.900653 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-05-17 00:35:34.901469 | orchestrator | Saturday 17 May 2025 00:35:34 +0000 (0:00:01.580) 0:05:09.352 **********
2025-05-17 00:35:35.646978 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:35:35.647267 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:35:35.649574 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:35:35.650524 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:35:35.652574 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:35:35.652599 | orchestrator | changed: [testbed-manager]
2025-05-17 00:35:35.653204 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:35:35.653873 | orchestrator |
2025-05-17 00:35:35.654489 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-05-17 00:35:35.655137 | orchestrator | Saturday 17 May 2025 00:35:35 +0000 (0:00:00.754) 0:05:10.106 **********
2025-05-17 00:35:35.707169 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:35:35.751096 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:35:35.783760 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:35:35.813047 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:35:35.856156 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:35:35.918226 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:35:35.918569 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:35:35.919314 | orchestrator |
2025-05-17 00:35:35.920040 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-05-17 00:35:35.920608 | orchestrator | Saturday 17 May 2025 00:35:35 +0000 (0:00:00.270) 0:05:10.377 **********
2025-05-17 00:35:35.988920 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:35:36.021113 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:35:36.052451 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:35:36.084464 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:35:36.115533 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:35:36.312061 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:35:36.312350 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:35:36.313049 | orchestrator |
2025-05-17 00:35:36.314097 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-05-17 00:35:36.316327 | orchestrator | Saturday 17 May 2025 00:35:36 +0000 (0:00:00.393) 0:05:10.770 **********
2025-05-17 00:35:36.390296 | orchestrator | ok: [testbed-manager]
2025-05-17 00:35:36.430381 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:35:36.465146 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:35:36.541949 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:35:36.619072 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:35:36.619311 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:35:36.619748 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:35:36.623336 | orchestrator |
2025-05-17 00:35:36.623655 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-05-17 00:35:36.624662 | orchestrator | Saturday 17 May 2025 00:35:36 +0000 (0:00:00.305) 0:05:11.075 **********
2025-05-17 00:35:36.710292 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:35:36.738608 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:35:36.771897 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:35:36.804432 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:35:36.840492 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:35:36.895663 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:35:36.896440 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:35:36.899313 | orchestrator |
2025-05-17 00:35:36.899340 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-05-17 00:35:36.899380 | orchestrator | Saturday 17 May 2025 00:35:36 +0000 (0:00:00.279) 0:05:11.355 **********
2025-05-17 00:35:36.989974 | orchestrator | ok: [testbed-manager]
2025-05-17 00:35:37.042394 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:35:37.079485 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:35:37.118412 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:35:37.188491 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:35:37.189629 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:35:37.190850 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:35:37.192698 | orchestrator |
2025-05-17 00:35:37.194201 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-05-17 00:35:37.195191 | orchestrator | Saturday 17 May 2025 00:35:37 +0000 (0:00:00.291) 0:05:11.647 **********
2025-05-17 00:35:37.255210 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:35:37.285029 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:35:37.347973 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:35:37.379186 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:35:37.435348 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:35:37.436055 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:35:37.438193 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:35:37.438303 | orchestrator |
2025-05-17 00:35:37.438320 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-05-17 00:35:37.438333 | orchestrator | Saturday 17 May 2025 00:35:37 +0000 (0:00:00.248) 0:05:11.896 **********
2025-05-17 00:35:37.512173 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:35:37.542109 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:35:37.571575 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:35:37.601609 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:35:37.640488 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:35:37.693923 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:35:37.694402 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:35:37.695160 | orchestrator |
2025-05-17 00:35:37.696264 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-05-17 00:35:37.696860 | orchestrator | Saturday 17 May 2025 00:35:37 +0000 (0:00:00.256) 0:05:12.153 **********
2025-05-17 00:35:38.158291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:35:38.158391 | orchestrator |
2025-05-17 00:35:38.159543 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-05-17 00:35:38.160192 | orchestrator | Saturday 17 May 2025 00:35:38 +0000 (0:00:00.464) 0:05:12.617 **********
2025-05-17 00:35:39.014939 | orchestrator | ok: [testbed-manager]
2025-05-17 00:35:39.016149 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:35:39.017062 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:35:39.017879 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:35:39.018515 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:35:39.019097 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:35:39.019642 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:35:39.020298 | orchestrator |
2025-05-17 00:35:39.021019 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-05-17 00:35:39.021470 | orchestrator | Saturday 17 May 2025 00:35:39 +0000 (0:00:00.854) 0:05:13.472 **********
2025-05-17 00:35:41.709232 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:35:41.709599 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:35:41.710657 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:35:41.711883 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:35:41.712991 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:35:41.713862 | orchestrator | ok: [testbed-manager]
2025-05-17 00:35:41.715023 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:35:41.715366 | orchestrator |
2025-05-17 00:35:41.715943 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-05-17 00:35:41.716843 | orchestrator | Saturday 17 May 2025 00:35:41 +0000 (0:00:02.695) 0:05:16.168 **********
2025-05-17 00:35:41.785549 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-05-17 00:35:41.785611 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-05-17 00:35:41.790521 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-05-17 00:35:41.857992 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-05-17 00:35:41.858143 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-05-17 00:35:41.943822 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-05-17 00:35:41.943927 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:35:41.943943 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-05-17 00:35:41.944052 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-05-17 00:35:41.944842 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-05-17 00:35:42.019772 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:35:42.019974 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-05-17 00:35:42.020329 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-05-17 00:35:42.021019 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-05-17 00:35:42.089282 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:35:42.089514 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-05-17 00:35:42.089763 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-05-17 00:35:42.089990 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-05-17 00:35:42.170205 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:35:42.170376 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-05-17 00:35:42.170672 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-05-17 00:35:42.171216 | orchestrator | skipping: [testbed-node-1] =>
(item=docker-engine)  2025-05-17 00:35:42.313551 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:35:42.314662 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:35:42.315996 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-17 00:35:42.317423 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-17 00:35:42.318612 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-17 00:35:42.319288 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:35:42.320128 | orchestrator | 2025-05-17 00:35:42.320917 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-17 00:35:42.321520 | orchestrator | Saturday 17 May 2025 00:35:42 +0000 (0:00:00.605) 0:05:16.773 ********** 2025-05-17 00:35:48.322396 | orchestrator | ok: [testbed-manager] 2025-05-17 00:35:48.322492 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:35:48.324411 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:35:48.326212 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:35:48.326424 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:35:48.329017 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:35:48.329043 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:35:48.329056 | orchestrator | 2025-05-17 00:35:48.329069 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-17 00:35:48.329352 | orchestrator | Saturday 17 May 2025 00:35:48 +0000 (0:00:06.005) 0:05:22.778 ********** 2025-05-17 00:35:49.306808 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:35:49.307136 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:35:49.307780 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:35:49.308194 | orchestrator | ok: [testbed-manager] 2025-05-17 00:35:49.309481 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:35:49.310224 | orchestrator | changed: [testbed-node-0] 
2025-05-17 00:35:49.310801 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:35:49.311451 | orchestrator | 2025-05-17 00:35:49.312016 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-17 00:35:49.312143 | orchestrator | Saturday 17 May 2025 00:35:49 +0000 (0:00:00.986) 0:05:23.765 ********** 2025-05-17 00:35:56.186344 | orchestrator | ok: [testbed-manager] 2025-05-17 00:35:56.186494 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:35:56.186587 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:35:56.186606 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:35:56.187559 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:35:56.187794 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:35:56.189253 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:35:56.189924 | orchestrator | 2025-05-17 00:35:56.190350 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-17 00:35:56.191200 | orchestrator | Saturday 17 May 2025 00:35:56 +0000 (0:00:06.878) 0:05:30.644 ********** 2025-05-17 00:35:59.546291 | orchestrator | changed: [testbed-manager] 2025-05-17 00:35:59.547036 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:35:59.550008 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:35:59.550077 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:35:59.550346 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:35:59.551046 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:35:59.552883 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:35:59.553211 | orchestrator | 2025-05-17 00:35:59.553823 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-17 00:35:59.554774 | orchestrator | Saturday 17 May 2025 00:35:59 +0000 (0:00:03.361) 0:05:34.005 ********** 2025-05-17 00:36:00.794747 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:00.794915 | 
orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:00.798456 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:00.798512 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:00.798524 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:00.798536 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:00.800015 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:00.800107 | orchestrator | 2025-05-17 00:36:00.800421 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-17 00:36:00.800928 | orchestrator | Saturday 17 May 2025 00:36:00 +0000 (0:00:01.245) 0:05:35.251 ********** 2025-05-17 00:36:02.257680 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:02.258173 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:02.261519 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:02.261554 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:02.261566 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:02.261577 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:02.262128 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:02.262918 | orchestrator | 2025-05-17 00:36:02.263295 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-17 00:36:02.264147 | orchestrator | Saturday 17 May 2025 00:36:02 +0000 (0:00:01.463) 0:05:36.714 ********** 2025-05-17 00:36:02.453173 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:36:02.532061 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:36:02.593902 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:36:02.655130 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:36:02.845348 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:36:02.846213 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:36:02.847136 | orchestrator | changed: [testbed-manager] 2025-05-17 00:36:02.848096 | orchestrator | 
2025-05-17 00:36:02.849261 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-17 00:36:02.849936 | orchestrator | Saturday 17 May 2025 00:36:02 +0000 (0:00:00.590) 0:05:37.304 ********** 2025-05-17 00:36:12.364135 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:12.364956 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:12.366434 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:12.367854 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:12.368977 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:12.369306 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:12.370138 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:12.371060 | orchestrator | 2025-05-17 00:36:12.372150 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-17 00:36:12.373006 | orchestrator | Saturday 17 May 2025 00:36:12 +0000 (0:00:09.515) 0:05:46.820 ********** 2025-05-17 00:36:13.251562 | orchestrator | changed: [testbed-manager] 2025-05-17 00:36:13.251764 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:13.252567 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:13.253036 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:13.254454 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:13.254495 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:13.254507 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:13.254518 | orchestrator | 2025-05-17 00:36:13.255072 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-17 00:36:13.255521 | orchestrator | Saturday 17 May 2025 00:36:13 +0000 (0:00:00.887) 0:05:47.707 ********** 2025-05-17 00:36:25.664540 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:25.664663 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:25.664679 | orchestrator | changed: [testbed-node-2] 
2025-05-17 00:36:25.664779 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:25.664792 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:25.664803 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:25.664883 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:25.665364 | orchestrator | 2025-05-17 00:36:25.666174 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-17 00:36:25.666457 | orchestrator | Saturday 17 May 2025 00:36:25 +0000 (0:00:12.406) 0:06:00.114 ********** 2025-05-17 00:36:37.809919 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:37.810097 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:37.810117 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:37.810129 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:37.810171 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:37.810182 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:37.810659 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:37.811763 | orchestrator | 2025-05-17 00:36:37.812352 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-17 00:36:37.813260 | orchestrator | Saturday 17 May 2025 00:36:37 +0000 (0:00:12.146) 0:06:12.260 ********** 2025-05-17 00:36:38.150531 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-17 00:36:38.954776 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-17 00:36:38.955242 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-17 00:36:38.956628 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-17 00:36:38.957301 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-17 00:36:38.958512 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-17 00:36:38.958903 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-17 00:36:38.959608 | 
orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-17 00:36:38.960462 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-17 00:36:38.961059 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-17 00:36:38.961895 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-17 00:36:38.962230 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-17 00:36:38.963024 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-17 00:36:38.963645 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-17 00:36:38.964147 | orchestrator | 2025-05-17 00:36:38.964551 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-17 00:36:38.965085 | orchestrator | Saturday 17 May 2025 00:36:38 +0000 (0:00:01.148) 0:06:13.409 ********** 2025-05-17 00:36:39.079596 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:36:39.138518 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:36:39.203968 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:36:39.264974 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:36:39.332834 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:36:39.464193 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:36:39.465295 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:36:39.466240 | orchestrator | 2025-05-17 00:36:39.467061 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-17 00:36:39.467866 | orchestrator | Saturday 17 May 2025 00:36:39 +0000 (0:00:00.515) 0:06:13.924 ********** 2025-05-17 00:36:43.016666 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:43.018383 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:43.020544 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:43.021299 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:43.022228 | orchestrator | changed: 
[testbed-node-1] 2025-05-17 00:36:43.022884 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:43.023455 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:43.024136 | orchestrator | 2025-05-17 00:36:43.025231 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-17 00:36:43.025782 | orchestrator | Saturday 17 May 2025 00:36:43 +0000 (0:00:03.548) 0:06:17.472 ********** 2025-05-17 00:36:43.146399 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:36:43.205215 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:36:43.266860 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:36:43.484143 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:36:43.544031 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:36:43.640297 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:36:43.641176 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:36:43.641423 | orchestrator | 2025-05-17 00:36:43.642186 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-17 00:36:43.642818 | orchestrator | Saturday 17 May 2025 00:36:43 +0000 (0:00:00.626) 0:06:18.099 ********** 2025-05-17 00:36:43.714472 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-17 00:36:43.714861 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-17 00:36:43.782970 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:36:43.783619 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-17 00:36:43.784340 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-17 00:36:43.847722 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:36:43.848300 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-17 00:36:43.849375 | orchestrator | skipping: [testbed-node-4] => 
(item=python-docker)  2025-05-17 00:36:43.921591 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:36:43.922132 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-17 00:36:43.922776 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-17 00:36:43.989570 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:36:43.990002 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-17 00:36:43.995124 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-17 00:36:44.055990 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:36:44.056360 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-17 00:36:44.057276 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-17 00:36:44.165586 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:36:44.165760 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-17 00:36:44.166373 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-17 00:36:44.167327 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:36:44.170124 | orchestrator | 2025-05-17 00:36:44.170151 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-17 00:36:44.170165 | orchestrator | Saturday 17 May 2025 00:36:44 +0000 (0:00:00.523) 0:06:18.623 ********** 2025-05-17 00:36:44.300227 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:36:44.370882 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:36:44.431793 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:36:44.491163 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:36:44.556604 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:36:44.641993 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:36:44.642249 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:36:44.642549 | orchestrator | 
2025-05-17 00:36:44.643105 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-17 00:36:44.643819 | orchestrator | Saturday 17 May 2025 00:36:44 +0000 (0:00:00.477) 0:06:19.100 ********** 2025-05-17 00:36:44.766713 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:36:44.828664 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:36:44.891349 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:36:44.956754 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:36:45.017775 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:36:45.110708 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:36:45.113265 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:36:45.114969 | orchestrator | 2025-05-17 00:36:45.116462 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-17 00:36:45.117798 | orchestrator | Saturday 17 May 2025 00:36:45 +0000 (0:00:00.468) 0:06:19.568 ********** 2025-05-17 00:36:45.240810 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:36:45.302306 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:36:45.368196 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:36:45.454415 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:36:45.537553 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:36:45.664889 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:36:45.665972 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:36:45.667036 | orchestrator | 2025-05-17 00:36:45.668210 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-17 00:36:45.668894 | orchestrator | Saturday 17 May 2025 00:36:45 +0000 (0:00:00.555) 0:06:20.124 ********** 2025-05-17 00:36:51.535891 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:51.536028 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:51.536042 | 
orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:51.536235 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:51.536434 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:51.536849 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:51.537300 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:51.538415 | orchestrator | 2025-05-17 00:36:51.538785 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-17 00:36:51.539117 | orchestrator | Saturday 17 May 2025 00:36:51 +0000 (0:00:05.868) 0:06:25.993 ********** 2025-05-17 00:36:52.359447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:36:52.359652 | orchestrator | 2025-05-17 00:36:52.361192 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-17 00:36:52.361709 | orchestrator | Saturday 17 May 2025 00:36:52 +0000 (0:00:00.824) 0:06:26.817 ********** 2025-05-17 00:36:53.146079 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:53.146292 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:53.146941 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:53.148363 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:53.148844 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:53.149460 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:53.150471 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:53.150821 | orchestrator | 2025-05-17 00:36:53.151658 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-17 00:36:53.152139 | orchestrator | Saturday 17 May 2025 00:36:53 +0000 (0:00:00.785) 0:06:27.602 ********** 2025-05-17 00:36:53.959103 | orchestrator | ok: [testbed-manager] 
2025-05-17 00:36:53.961957 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:53.962895 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:53.963755 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:53.964421 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:53.965035 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:53.965633 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:53.966254 | orchestrator | 2025-05-17 00:36:53.966953 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-17 00:36:53.967661 | orchestrator | Saturday 17 May 2025 00:36:53 +0000 (0:00:00.814) 0:06:28.416 ********** 2025-05-17 00:36:55.427283 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:55.427483 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:55.427572 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:55.428800 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:55.429698 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:55.430267 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:55.433010 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:55.433043 | orchestrator | 2025-05-17 00:36:55.433063 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-17 00:36:55.433084 | orchestrator | Saturday 17 May 2025 00:36:55 +0000 (0:00:01.470) 0:06:29.886 ********** 2025-05-17 00:36:55.549631 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:36:56.729653 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:36:56.729921 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:36:56.730370 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:36:56.731074 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:36:56.732273 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:36:56.732833 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:36:56.733229 | orchestrator | 
2025-05-17 00:36:56.733828 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-17 00:36:56.736102 | orchestrator | Saturday 17 May 2025 00:36:56 +0000 (0:00:01.300) 0:06:31.186 ********** 2025-05-17 00:36:57.978298 | orchestrator | ok: [testbed-manager] 2025-05-17 00:36:57.978829 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:57.980207 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:57.980554 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:57.981268 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:57.982246 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:57.982718 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:57.983397 | orchestrator | 2025-05-17 00:36:57.984070 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-17 00:36:57.984469 | orchestrator | Saturday 17 May 2025 00:36:57 +0000 (0:00:01.248) 0:06:32.434 ********** 2025-05-17 00:36:59.360184 | orchestrator | changed: [testbed-manager] 2025-05-17 00:36:59.361187 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:36:59.361238 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:36:59.362136 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:36:59.362885 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:36:59.363197 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:36:59.363700 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:36:59.364198 | orchestrator | 2025-05-17 00:36:59.364758 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-17 00:36:59.365308 | orchestrator | Saturday 17 May 2025 00:36:59 +0000 (0:00:01.384) 0:06:33.818 ********** 2025-05-17 00:37:00.354312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:37:00.354422 | orchestrator | 2025-05-17 00:37:00.354438 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-17 00:37:00.355090 | orchestrator | Saturday 17 May 2025 00:37:00 +0000 (0:00:00.992) 0:06:34.811 ********** 2025-05-17 00:37:01.657042 | orchestrator | ok: [testbed-manager] 2025-05-17 00:37:01.658197 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:37:01.658489 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:37:01.659782 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:37:01.660229 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:37:01.660705 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:37:01.661509 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:37:01.662187 | orchestrator | 2025-05-17 00:37:01.663554 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-17 00:37:01.663577 | orchestrator | Saturday 17 May 2025 00:37:01 +0000 (0:00:01.303) 0:06:36.115 ********** 2025-05-17 00:37:02.751082 | orchestrator | ok: [testbed-manager] 2025-05-17 00:37:02.751265 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:37:02.752263 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:37:02.752950 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:37:02.756288 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:37:02.757421 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:37:02.758451 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:37:02.758971 | orchestrator | 2025-05-17 00:37:02.760398 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-17 00:37:02.760977 | orchestrator | Saturday 17 May 2025 00:37:02 +0000 (0:00:01.092) 0:06:37.207 ********** 2025-05-17 00:37:03.857951 | orchestrator | ok: [testbed-manager] 2025-05-17 00:37:03.858110 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:37:03.858845 | 
orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:03.859525 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:03.860425 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:03.862217 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:03.862838 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:03.863576 | orchestrator |
2025-05-17 00:37:03.864068 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-17 00:37:03.864394 | orchestrator | Saturday 17 May 2025 00:37:03 +0000 (0:00:01.107) 0:06:38.315 **********
2025-05-17 00:37:05.181202 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:05.181377 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:05.182285 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:05.183639 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:05.183985 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:05.184925 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:05.185806 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:05.187071 | orchestrator |
2025-05-17 00:37:05.188295 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-17 00:37:05.189180 | orchestrator | Saturday 17 May 2025 00:37:05 +0000 (0:00:01.322) 0:06:39.637 **********
2025-05-17 00:37:06.369864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:37:06.370177 | orchestrator |
2025-05-17 00:37:06.373533 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-17 00:37:06.374796 | orchestrator | Saturday 17 May 2025 00:37:06 +0000 (0:00:00.902) 0:06:40.540 **********
2025-05-17 00:37:06.374823 | orchestrator |
2025-05-17 00:37:06.375869 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-17 00:37:06.376610 | orchestrator | Saturday 17 May 2025 00:37:06 +0000 (0:00:00.037) 0:06:40.578 **********
2025-05-17 00:37:06.377548 | orchestrator |
2025-05-17 00:37:06.378495 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-17 00:37:06.379251 | orchestrator | Saturday 17 May 2025 00:37:06 +0000 (0:00:00.046) 0:06:40.624 **********
2025-05-17 00:37:06.379273 | orchestrator |
2025-05-17 00:37:06.380012 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-17 00:37:06.380196 | orchestrator | Saturday 17 May 2025 00:37:06 +0000 (0:00:00.038) 0:06:40.663 **********
2025-05-17 00:37:06.381048 | orchestrator |
2025-05-17 00:37:06.381564 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-17 00:37:06.381870 | orchestrator | Saturday 17 May 2025 00:37:06 +0000 (0:00:00.038) 0:06:40.702 **********
2025-05-17 00:37:06.382788 | orchestrator |
2025-05-17 00:37:06.382959 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-17 00:37:06.383123 | orchestrator | Saturday 17 May 2025 00:37:06 +0000 (0:00:00.045) 0:06:40.748 **********
2025-05-17 00:37:06.383814 | orchestrator |
2025-05-17 00:37:06.387270 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-17 00:37:06.387345 | orchestrator | Saturday 17 May 2025 00:37:06 +0000 (0:00:00.038) 0:06:40.786 **********
2025-05-17 00:37:06.387361 | orchestrator |
2025-05-17 00:37:06.387373 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-17 00:37:06.387384 | orchestrator | Saturday 17 May 2025 00:37:06 +0000 (0:00:00.039) 0:06:40.826 **********
2025-05-17 00:37:07.421299 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:07.421412 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:07.422008 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:07.422093 | orchestrator |
2025-05-17 00:37:07.422377 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-17 00:37:07.422992 | orchestrator | Saturday 17 May 2025 00:37:07 +0000 (0:00:01.052) 0:06:41.879 **********
2025-05-17 00:37:08.831355 | orchestrator | changed: [testbed-manager]
2025-05-17 00:37:08.831720 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:37:08.832877 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:37:08.834011 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:37:08.835225 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:37:08.835514 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:37:08.836294 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:37:08.836979 | orchestrator |
2025-05-17 00:37:08.837656 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-05-17 00:37:08.838266 | orchestrator | Saturday 17 May 2025 00:37:08 +0000 (0:00:01.409) 0:06:43.289 **********
2025-05-17 00:37:09.907967 | orchestrator | changed: [testbed-manager]
2025-05-17 00:37:09.908183 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:37:09.909010 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:37:09.910688 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:37:09.911057 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:37:09.911375 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:37:09.912156 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:37:09.912654 | orchestrator |
2025-05-17 00:37:09.913473 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-05-17 00:37:09.913966 | orchestrator | Saturday 17 May 2025 00:37:09 +0000 (0:00:01.075) 0:06:44.364 **********
2025-05-17 00:37:10.039458 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:37:11.865157 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:37:11.865329 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:37:11.865943 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:37:11.866210 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:37:11.867605 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:37:11.867646 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:37:11.867994 | orchestrator |
2025-05-17 00:37:11.868024 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-17 00:37:11.868315 | orchestrator | Saturday 17 May 2025 00:37:11 +0000 (0:00:01.960) 0:06:46.324 **********
2025-05-17 00:37:11.965590 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:37:11.965771 | orchestrator |
2025-05-17 00:37:11.966276 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-17 00:37:11.967012 | orchestrator | Saturday 17 May 2025 00:37:11 +0000 (0:00:00.099) 0:06:46.423 **********
2025-05-17 00:37:12.963835 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:12.963998 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:37:12.964085 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:37:12.964682 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:37:12.965749 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:37:12.966351 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:37:12.966855 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:37:12.967331 | orchestrator |
2025-05-17 00:37:12.968393 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-17 00:37:12.968821 | orchestrator | Saturday 17 May 2025 00:37:12 +0000 (0:00:00.995) 0:06:47.419 **********
2025-05-17 00:37:13.084391 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:37:13.145039 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:37:13.213463 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:37:13.269146 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:37:13.328027 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:37:13.589617 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:37:13.589872 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:37:13.593063 | orchestrator |
2025-05-17 00:37:13.593103 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-17 00:37:13.596002 | orchestrator | Saturday 17 May 2025 00:37:13 +0000 (0:00:00.628) 0:06:48.048 **********
2025-05-17 00:37:14.458006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:37:14.458251 | orchestrator |
2025-05-17 00:37:14.461093 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-17 00:37:14.461994 | orchestrator | Saturday 17 May 2025 00:37:14 +0000 (0:00:00.866) 0:06:48.914 **********
2025-05-17 00:37:15.290093 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:15.290205 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:15.292014 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:15.292818 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:15.293548 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:15.294174 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:15.294503 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:15.295164 | orchestrator |
2025-05-17 00:37:15.295526 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-17 00:37:15.296087 | orchestrator | Saturday 17 May 2025 00:37:15 +0000 (0:00:00.832) 0:06:49.747 **********
2025-05-17 00:37:17.826452 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-17 00:37:17.826927 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-17 00:37:17.828150 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-17 00:37:17.829770 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-17 00:37:17.830766 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-17 00:37:17.831862 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-17 00:37:17.832191 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-17 00:37:17.834147 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-17 00:37:17.836530 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-17 00:37:17.837806 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-17 00:37:17.839020 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-17 00:37:17.840242 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-17 00:37:17.840604 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-17 00:37:17.841078 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-17 00:37:17.842309 | orchestrator |
2025-05-17 00:37:17.842714 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-17 00:37:17.843519 | orchestrator | Saturday 17 May 2025 00:37:17 +0000 (0:00:02.535) 0:06:52.282 **********
2025-05-17 00:37:17.963722 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:37:18.026984 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:37:18.102296 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:37:18.164829 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:37:18.240417 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:37:18.344890 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:37:18.345068 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:37:18.347733 | orchestrator |
2025-05-17 00:37:18.348126 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-17 00:37:18.348881 | orchestrator | Saturday 17 May 2025 00:37:18 +0000 (0:00:00.522) 0:06:52.805 **********
2025-05-17 00:37:19.134978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:37:19.135085 | orchestrator |
2025-05-17 00:37:19.136766 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-17 00:37:19.138162 | orchestrator | Saturday 17 May 2025 00:37:19 +0000 (0:00:00.785) 0:06:53.591 **********
2025-05-17 00:37:19.921262 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:19.921939 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:19.923292 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:19.924462 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:19.925478 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:19.926411 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:19.927367 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:19.928154 | orchestrator |
2025-05-17 00:37:19.928402 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-17 00:37:19.929392 | orchestrator | Saturday 17 May 2025 00:37:19 +0000 (0:00:00.786) 0:06:54.378 **********
2025-05-17 00:37:20.387890 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:20.451079 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:20.522224 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:20.918229 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:20.918896 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:20.919397 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:20.923216 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:20.923242 | orchestrator |
2025-05-17 00:37:20.923255 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-17 00:37:20.923269 | orchestrator | Saturday 17 May 2025 00:37:20 +0000 (0:00:00.997) 0:06:55.375 **********
2025-05-17 00:37:21.043159 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:37:21.112445 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:37:21.174384 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:37:21.240408 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:37:21.303110 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:37:21.397090 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:37:21.397687 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:37:21.398176 | orchestrator |
2025-05-17 00:37:21.398959 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-17 00:37:21.402444 | orchestrator | Saturday 17 May 2025 00:37:21 +0000 (0:00:00.479) 0:06:55.855 **********
2025-05-17 00:37:22.749499 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:22.749773 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:22.749901 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:22.751425 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:22.751813 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:22.752512 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:22.754093 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:22.754799 | orchestrator |
2025-05-17 00:37:22.755628 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-17 00:37:22.756257 | orchestrator | Saturday 17 May 2025 00:37:22 +0000 (0:00:01.353) 0:06:57.208 **********
2025-05-17 00:37:22.874722 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:37:22.942077 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:37:23.002978 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:37:23.064104 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:37:23.130391 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:37:23.222728 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:37:23.222905 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:37:23.224097 | orchestrator |
2025-05-17 00:37:23.224716 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-17 00:37:23.225284 | orchestrator | Saturday 17 May 2025 00:37:23 +0000 (0:00:00.472) 0:06:57.681 **********
2025-05-17 00:37:24.909315 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:24.909490 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:24.909856 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:24.910846 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:24.912606 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:24.913312 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:24.913920 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:24.915906 | orchestrator |
2025-05-17 00:37:24.916894 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-17 00:37:24.917883 | orchestrator | Saturday 17 May 2025 00:37:24 +0000 (0:00:01.684) 0:06:59.366 **********
2025-05-17 00:37:26.449502 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:26.450188 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:37:26.450926 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:37:26.454194 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:37:26.454228 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:37:26.454246 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:37:26.455697 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:37:26.457045 | orchestrator |
2025-05-17 00:37:26.457600 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-05-17 00:37:26.458895 | orchestrator | Saturday 17 May 2025 00:37:26 +0000 (0:00:01.542) 0:07:00.908 **********
2025-05-17 00:37:28.115478 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:28.115581 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:37:28.115808 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:37:28.117113 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:37:28.117327 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:37:28.118446 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:37:28.118888 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:37:28.119522 | orchestrator |
2025-05-17 00:37:28.120232 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-05-17 00:37:28.120556 | orchestrator | Saturday 17 May 2025 00:37:28 +0000 (0:00:01.663) 0:07:02.572 **********
2025-05-17 00:37:29.709694 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:29.709947 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:37:29.710902 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:37:29.714415 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:37:29.714500 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:37:29.715195 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:37:29.718242 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:37:29.718348 | orchestrator |
2025-05-17 00:37:29.720947 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-17 00:37:29.721870 | orchestrator | Saturday 17 May 2025 00:37:29 +0000 (0:00:01.594) 0:07:04.166 **********
2025-05-17 00:37:30.701496 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:30.702903 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:30.704257 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:30.704702 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:30.705322 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:30.706269 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:30.706787 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:30.707756 | orchestrator |
2025-05-17 00:37:30.708122 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-17 00:37:30.708482 | orchestrator | Saturday 17 May 2025 00:37:30 +0000 (0:00:00.992) 0:07:05.159 **********
2025-05-17 00:37:30.835861 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:37:30.895142 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:37:30.953004 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:37:31.018983 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:37:31.081096 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:37:31.459369 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:37:31.459564 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:37:31.459927 | orchestrator |
2025-05-17 00:37:31.460789 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-05-17 00:37:31.461636 | orchestrator | Saturday 17 May 2025 00:37:31 +0000 (0:00:00.759) 0:07:05.918 **********
2025-05-17 00:37:31.589163 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:37:31.676618 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:37:31.740282 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:37:31.802392 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:37:31.871968 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:37:31.971447 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:37:31.971569 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:37:31.971584 | orchestrator |
2025-05-17 00:37:31.971598 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-05-17 00:37:31.972438 | orchestrator | Saturday 17 May 2025 00:37:31 +0000 (0:00:00.505) 0:07:06.424 **********
2025-05-17 00:37:32.100204 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:32.161444 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:32.224489 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:32.292676 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:32.354226 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:32.455140 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:32.455352 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:32.456208 | orchestrator |
2025-05-17 00:37:32.457340 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-05-17 00:37:32.458402 | orchestrator | Saturday 17 May 2025 00:37:32 +0000 (0:00:00.488) 0:07:06.912 **********
2025-05-17 00:37:32.595986 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:32.660866 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:32.906080 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:32.968773 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:33.030081 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:33.152259 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:33.152864 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:33.154091 | orchestrator |
2025-05-17 00:37:33.155263 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-05-17 00:37:33.156030 | orchestrator | Saturday 17 May 2025 00:37:33 +0000 (0:00:00.698) 0:07:07.610 **********
2025-05-17 00:37:33.295862 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:33.375057 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:33.450394 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:33.513965 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:33.603401 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:33.706225 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:33.707897 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:33.708756 | orchestrator |
2025-05-17 00:37:33.709460 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-05-17 00:37:33.710488 | orchestrator | Saturday 17 May 2025 00:37:33 +0000 (0:00:00.553) 0:07:08.164 **********
2025-05-17 00:37:40.272419 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:40.272537 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:40.274994 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:40.275760 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:40.276174 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:40.277422 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:40.277812 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:40.278377 | orchestrator |
2025-05-17 00:37:40.280522 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-05-17 00:37:40.281407 | orchestrator | Saturday 17 May 2025 00:37:40 +0000 (0:00:06.565) 0:07:14.729 **********
2025-05-17 00:37:40.469418 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:37:40.529618 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:37:40.597078 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:37:40.656046 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:37:40.763026 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:37:40.765798 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:37:40.766630 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:37:40.770186 | orchestrator |
2025-05-17 00:37:40.770216 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-05-17 00:37:40.770230 | orchestrator | Saturday 17 May 2025 00:37:40 +0000 (0:00:00.490) 0:07:15.220 **********
2025-05-17 00:37:41.717844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:37:41.718531 | orchestrator |
2025-05-17 00:37:41.718874 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-05-17 00:37:41.722317 | orchestrator | Saturday 17 May 2025 00:37:41 +0000 (0:00:00.955) 0:07:16.175 **********
2025-05-17 00:37:43.396473 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:43.396570 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:43.398184 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:43.407460 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:43.407540 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:43.407606 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:43.408419 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:43.408626 | orchestrator |
2025-05-17 00:37:43.409060 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-05-17 00:37:43.409283 | orchestrator | Saturday 17 May 2025 00:37:43 +0000 (0:00:01.676) 0:07:17.852 **********
2025-05-17 00:37:44.499366 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:44.499528 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:44.501714 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:44.503236 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:44.504085 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:44.505096 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:44.506095 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:44.507298 | orchestrator |
2025-05-17 00:37:44.508128 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-05-17 00:37:44.508799 | orchestrator | Saturday 17 May 2025 00:37:44 +0000 (0:00:01.105) 0:07:18.957 **********
2025-05-17 00:37:44.888865 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:45.301805 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:45.303932 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:45.303976 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:45.303989 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:45.304369 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:45.307147 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:45.307174 | orchestrator |
2025-05-17 00:37:45.307188 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-05-17 00:37:45.307202 | orchestrator | Saturday 17 May 2025 00:37:45 +0000 (0:00:00.803) 0:07:19.760 **********
2025-05-17 00:37:47.165287 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-17 00:37:47.170132 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-17 00:37:47.170200 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-17 00:37:47.170214 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-17 00:37:47.170289 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-17 00:37:47.173338 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-17 00:37:47.174278 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-17 00:37:47.175984 | orchestrator |
2025-05-17 00:37:47.176275 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-05-17 00:37:47.177506 | orchestrator | Saturday 17 May 2025 00:37:47 +0000 (0:00:01.861) 0:07:21.622 **********
2025-05-17 00:37:47.947444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:37:47.948605 | orchestrator |
2025-05-17 00:37:47.949898 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-05-17 00:37:47.952858 | orchestrator | Saturday 17 May 2025 00:37:47 +0000 (0:00:00.783) 0:07:22.405 **********
2025-05-17 00:37:56.872541 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:37:56.876164 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:37:56.876214 | orchestrator | changed: [testbed-manager]
2025-05-17 00:37:56.878579 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:37:56.879548 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:37:56.880532 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:37:56.881802 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:37:56.886844 | orchestrator |
2025-05-17 00:37:56.887377 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-05-17 00:37:56.889246 | orchestrator | Saturday 17 May 2025 00:37:56 +0000 (0:00:08.923) 0:07:31.328 **********
2025-05-17 00:37:59.651825 | orchestrator | ok: [testbed-manager]
2025-05-17 00:37:59.653415 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:37:59.656094 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:37:59.656128 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:37:59.656139 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:37:59.656616 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:37:59.657239 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:37:59.657786 | orchestrator |
2025-05-17 00:37:59.658925 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-05-17 00:37:59.658956 | orchestrator | Saturday 17 May 2025 00:37:59 +0000 (0:00:02.780) 0:07:34.109 **********
2025-05-17 00:38:00.898827 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:38:00.899712 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:38:00.900307 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:38:00.901117 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:38:00.901662 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:38:00.902598 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:38:00.903389 | orchestrator |
2025-05-17 00:38:00.904076 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-05-17 00:38:00.905013 | orchestrator | Saturday 17 May 2025 00:38:00 +0000 (0:00:01.243) 0:07:35.352 **********
2025-05-17 00:38:02.066604 | orchestrator | changed: [testbed-manager]
2025-05-17 00:38:02.067421 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:38:02.067499 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:38:02.067540 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:38:02.067593 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:38:02.071349 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:38:02.071429 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:38:02.071445 | orchestrator |
2025-05-17 00:38:02.071458 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-05-17 00:38:02.071471 | orchestrator |
2025-05-17 00:38:02.071482 | orchestrator | TASK [Include hardening role] **************************************************
2025-05-17 00:38:02.071493 | orchestrator | Saturday 17 May 2025 00:38:02 +0000 (0:00:01.172) 0:07:36.524 **********
2025-05-17 00:38:02.390937 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:38:02.444389 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:38:02.505019 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:38:02.564599 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:38:02.623730 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:38:02.735208 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:38:02.735656 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:38:02.736935 | orchestrator |
2025-05-17 00:38:02.737333 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-05-17 00:38:02.738356 | orchestrator |
2025-05-17 00:38:02.739214 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-05-17 00:38:02.740018 | orchestrator | Saturday 17 May 2025 00:38:02 +0000 (0:00:00.668) 0:07:37.192 **********
2025-05-17 00:38:04.028108 | orchestrator | changed: [testbed-manager]
2025-05-17 00:38:04.028577 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:38:04.029134 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:38:04.033044 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:38:04.033447 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:38:04.034269 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:38:04.035241 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:38:04.036118 | orchestrator |
2025-05-17 00:38:04.039446 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-05-17 00:38:04.040128 | orchestrator | Saturday 17 May 2025 00:38:04 +0000 (0:00:01.292) 0:07:38.484 **********
2025-05-17 00:38:05.429655 | orchestrator | ok: [testbed-manager]
2025-05-17 00:38:05.432202 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:38:05.432242 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:38:05.432255 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:38:05.432609 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:38:05.433376 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:38:05.434717 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:38:05.434746 | orchestrator |
2025-05-17 00:38:05.436027 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-17 00:38:05.436090 | orchestrator | Saturday 17 May 2025 00:38:05 +0000 (0:00:01.400) 0:07:39.885 **********
2025-05-17 00:38:05.554693 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:38:05.615683 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:38:05.674006 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:38:05.888914 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:38:05.955600 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:38:06.357204 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:38:06.358355 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:38:06.359361 | orchestrator |
2025-05-17 00:38:06.362191 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-17 00:38:06.363019 | orchestrator | Saturday 17 May 2025 00:38:06 +0000 (0:00:00.928) 0:07:40.814 **********
2025-05-17 00:38:07.543205 | orchestrator | changed: [testbed-manager]
2025-05-17 00:38:07.543822 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:38:07.545246 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:38:07.547816 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:38:07.548521 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:38:07.549588 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:38:07.550089 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:38:07.550885 | orchestrator |
2025-05-17 00:38:07.551379 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-05-17 00:38:07.552294 | orchestrator |
2025-05-17 00:38:07.552574 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-05-17 00:38:07.553115 | orchestrator | Saturday 17 May 2025 00:38:07 +0000 (0:00:01.187) 0:07:42.002 **********
2025-05-17 00:38:08.332794 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:38:08.333687 | orchestrator |
2025-05-17 00:38:08.334522 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-17 00:38:08.335955 | orchestrator | Saturday 17 May 2025 00:38:08 +0000 (0:00:00.786) 0:07:42.788 **********
2025-05-17 00:38:08.791875 | orchestrator | ok: [testbed-manager]
2025-05-17 00:38:08.860967 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:38:08.934068 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:38:09.352368 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:38:09.353092 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:38:09.353320 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:38:09.354156 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:38:09.354698 | orchestrator |
2025-05-17 00:38:09.355240 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-17 00:38:09.355894 | orchestrator | Saturday 17 May 2025 00:38:09 +0000 (0:00:01.021) 0:07:43.810 **********
2025-05-17 00:38:10.445789 | orchestrator | changed: [testbed-manager]
2025-05-17 00:38:10.445888 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:38:10.446527 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:38:10.447435 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:38:10.448246 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:38:10.449203 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:38:10.449731 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:38:10.450680 | orchestrator |
2025-05-17 00:38:10.451178 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-05-17 00:38:10.451844 | orchestrator | Saturday 17 May 2025 00:38:10 +0000 (0:00:01.090) 0:07:44.900 **********
2025-05-17 00:38:11.244561 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:38:11.245781 | orchestrator |
2025-05-17 00:38:11.246899 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-17 00:38:11.248069 | orchestrator | Saturday 17 May 2025 00:38:11 +0000 (0:00:00.800) 0:07:45.701 **********
2025-05-17 00:38:11.693355 | orchestrator | ok: [testbed-manager]
2025-05-17 00:38:12.243675 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:38:12.244520 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:38:12.245384 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:38:12.247245 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:38:12.248383 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:38:12.249561 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:38:12.250203 | orchestrator |
2025-05-17 00:38:12.251240 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-17 00:38:12.252580 | orchestrator | Saturday 17 May 2025 00:38:12 +0000 (0:00:01.001) 0:07:46.702 **********
2025-05-17 00:38:13.319961 | orchestrator | changed: [testbed-manager]
2025-05-17 00:38:13.320121 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:38:13.320534 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:38:13.321090 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:38:13.321637 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:38:13.322391 | orchestrator |
changed: [testbed-node-1] 2025-05-17 00:38:13.323014 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:38:13.323819 | orchestrator | 2025-05-17 00:38:13.324015 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:38:13.326145 | orchestrator | 2025-05-17 00:38:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-17 00:38:13.326175 | orchestrator | 2025-05-17 00:38:13 | INFO  | Please wait and do not abort execution. 2025-05-17 00:38:13.326609 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-17 00:38:13.327748 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-17 00:38:13.328514 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-17 00:38:13.330748 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-17 00:38:13.331792 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-17 00:38:13.332464 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-17 00:38:13.333185 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-17 00:38:13.336131 | orchestrator | 2025-05-17 00:38:13.340275 | orchestrator | Saturday 17 May 2025 00:38:13 +0000 (0:00:01.074) 0:07:47.777 ********** 2025-05-17 00:38:13.340302 | orchestrator | =============================================================================== 2025-05-17 00:38:13.340314 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.40s 2025-05-17 00:38:13.340326 | orchestrator | osism.commons.packages : Download required packages 
-------------------- 39.65s 2025-05-17 00:38:13.340337 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.70s 2025-05-17 00:38:13.340349 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.74s 2025-05-17 00:38:13.340360 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.41s 2025-05-17 00:38:13.341180 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.15s 2025-05-17 00:38:13.342559 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.40s 2025-05-17 00:38:13.342657 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.40s 2025-05-17 00:38:13.344162 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.52s 2025-05-17 00:38:13.345929 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.92s 2025-05-17 00:38:13.346963 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.99s 2025-05-17 00:38:13.347149 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.93s 2025-05-17 00:38:13.347965 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.52s 2025-05-17 00:38:13.348900 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.31s 2025-05-17 00:38:13.349499 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.88s 2025-05-17 00:38:13.350201 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 6.57s 2025-05-17 00:38:13.350504 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.01s 2025-05-17 00:38:13.351144 | orchestrator | osism.services.docker : Ensure that some packages are not 
installed ----- 5.87s 2025-05-17 00:38:13.351444 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.84s 2025-05-17 00:38:13.351875 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.77s 2025-05-17 00:38:13.967335 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-17 00:38:13.967439 | orchestrator | + osism apply network 2025-05-17 00:38:15.821242 | orchestrator | 2025-05-17 00:38:15 | INFO  | Task 07272be2-9ff9-4fb1-af83-a09e8dd0c8f9 (network) was prepared for execution. 2025-05-17 00:38:15.821347 | orchestrator | 2025-05-17 00:38:15 | INFO  | It takes a moment until task 07272be2-9ff9-4fb1-af83-a09e8dd0c8f9 (network) has been started and output is visible here. 2025-05-17 00:38:19.139490 | orchestrator | 2025-05-17 00:38:19.140408 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-17 00:38:19.140612 | orchestrator | 2025-05-17 00:38:19.143173 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-17 00:38:19.143607 | orchestrator | Saturday 17 May 2025 00:38:19 +0000 (0:00:00.213) 0:00:00.213 ********** 2025-05-17 00:38:19.285898 | orchestrator | ok: [testbed-manager] 2025-05-17 00:38:19.359698 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:38:19.433530 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:38:19.509683 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:38:19.588027 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:38:19.812376 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:38:19.812581 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:38:19.813252 | orchestrator | 2025-05-17 00:38:19.813849 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-17 00:38:19.814467 | orchestrator | Saturday 17 May 2025 00:38:19 +0000 (0:00:00.674) 0:00:00.887 ********** 2025-05-17 00:38:21.018916 | orchestrator 
| included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:38:21.019022 | orchestrator | 2025-05-17 00:38:21.019154 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-17 00:38:21.023110 | orchestrator | Saturday 17 May 2025 00:38:21 +0000 (0:00:01.204) 0:00:02.092 ********** 2025-05-17 00:38:23.053688 | orchestrator | ok: [testbed-manager] 2025-05-17 00:38:23.053912 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:38:23.054163 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:38:23.055572 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:38:23.059430 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:38:23.060693 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:38:23.061186 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:38:23.064540 | orchestrator | 2025-05-17 00:38:23.065165 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-17 00:38:23.065865 | orchestrator | Saturday 17 May 2025 00:38:23 +0000 (0:00:02.034) 0:00:04.127 ********** 2025-05-17 00:38:24.804491 | orchestrator | ok: [testbed-manager] 2025-05-17 00:38:24.804602 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:38:24.808361 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:38:24.809389 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:38:24.811084 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:38:24.811112 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:38:24.814533 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:38:24.814939 | orchestrator | 2025-05-17 00:38:24.816287 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-17 00:38:24.816442 | orchestrator | Saturday 17 May 2025 00:38:24 +0000 (0:00:01.748) 0:00:05.875 
********** 2025-05-17 00:38:25.361697 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-17 00:38:25.361816 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-17 00:38:25.362601 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-17 00:38:25.968449 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-17 00:38:25.972561 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-17 00:38:25.972598 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-17 00:38:25.972611 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-17 00:38:25.972623 | orchestrator | 2025-05-17 00:38:25.972665 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-17 00:38:25.973074 | orchestrator | Saturday 17 May 2025 00:38:25 +0000 (0:00:01.165) 0:00:07.041 ********** 2025-05-17 00:38:27.652154 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-17 00:38:27.652323 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-17 00:38:27.652757 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-17 00:38:27.653778 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-17 00:38:27.653800 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-17 00:38:27.654297 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-17 00:38:27.655133 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-17 00:38:27.655675 | orchestrator | 2025-05-17 00:38:27.656047 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-17 00:38:27.656755 | orchestrator | Saturday 17 May 2025 00:38:27 +0000 (0:00:01.686) 0:00:08.727 ********** 2025-05-17 00:38:29.359025 | orchestrator | changed: [testbed-manager] 2025-05-17 00:38:29.359950 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:38:29.362511 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:38:29.363104 | orchestrator 
| changed: [testbed-node-2] 2025-05-17 00:38:29.364015 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:38:29.364712 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:38:29.365233 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:38:29.366124 | orchestrator | 2025-05-17 00:38:29.366852 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-17 00:38:29.367211 | orchestrator | Saturday 17 May 2025 00:38:29 +0000 (0:00:01.702) 0:00:10.430 ********** 2025-05-17 00:38:29.812885 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-17 00:38:30.343944 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-17 00:38:30.344625 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-17 00:38:30.346144 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-17 00:38:30.346831 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-17 00:38:30.347837 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-17 00:38:30.348418 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-17 00:38:30.349601 | orchestrator | 2025-05-17 00:38:30.350233 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-17 00:38:30.351249 | orchestrator | Saturday 17 May 2025 00:38:30 +0000 (0:00:00.990) 0:00:11.420 ********** 2025-05-17 00:38:30.789697 | orchestrator | ok: [testbed-manager] 2025-05-17 00:38:30.876538 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:38:31.496768 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:38:31.498184 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:38:31.501387 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:38:31.501427 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:38:31.501439 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:38:31.501452 | orchestrator | 2025-05-17 00:38:31.502446 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-17 
00:38:31.505088 | orchestrator | Saturday 17 May 2025 00:38:31 +0000 (0:00:01.148) 0:00:12.569 ********** 2025-05-17 00:38:31.659631 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:38:31.737384 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:38:31.826959 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:38:31.903609 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:38:31.983593 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:38:32.263742 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:38:32.264688 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:38:32.265447 | orchestrator | 2025-05-17 00:38:32.269473 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-17 00:38:32.269513 | orchestrator | Saturday 17 May 2025 00:38:32 +0000 (0:00:00.769) 0:00:13.338 ********** 2025-05-17 00:38:34.302988 | orchestrator | ok: [testbed-manager] 2025-05-17 00:38:34.304016 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:38:34.305926 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:38:34.306699 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:38:34.308136 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:38:34.309158 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:38:34.310271 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:38:34.311025 | orchestrator | 2025-05-17 00:38:34.311862 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-17 00:38:34.312606 | orchestrator | Saturday 17 May 2025 00:38:34 +0000 (0:00:02.040) 0:00:15.378 ********** 2025-05-17 00:38:35.165768 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-17 00:38:36.377274 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-17 00:38:36.377451 | orchestrator | 
changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-17 00:38:36.378120 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-17 00:38:36.378221 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-17 00:38:36.379331 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-17 00:38:36.383521 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-17 00:38:36.383573 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-17 00:38:36.383583 | orchestrator | 2025-05-17 00:38:36.383591 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-17 00:38:36.383599 | orchestrator | Saturday 17 May 2025 00:38:36 +0000 (0:00:02.071) 0:00:17.450 ********** 2025-05-17 00:38:37.882157 | orchestrator | ok: [testbed-manager] 2025-05-17 00:38:37.882261 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:38:37.885131 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:38:37.885811 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:38:37.886638 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:38:37.887488 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:38:37.888220 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:38:37.888627 | orchestrator | 2025-05-17 00:38:37.889233 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-17 00:38:37.889886 | orchestrator | Saturday 17 May 2025 00:38:37 +0000 (0:00:01.506) 0:00:18.956 ********** 2025-05-17 00:38:39.259165 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:38:39.262404 | orchestrator | 2025-05-17 00:38:39.262440 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-17 00:38:39.262454 | orchestrator | Saturday 17 May 2025 00:38:39 +0000 (0:00:01.374) 0:00:20.331 ********** 2025-05-17 00:38:39.785722 | orchestrator | ok: [testbed-manager] 2025-05-17 00:38:40.205419 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:38:40.206334 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:38:40.207250 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:38:40.209471 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:38:40.209512 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:38:40.210068 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:38:40.211436 | orchestrator | 2025-05-17 00:38:40.212045 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-17 00:38:40.212617 | orchestrator | Saturday 17 May 2025 00:38:40 +0000 (0:00:00.950) 0:00:21.281 ********** 2025-05-17 00:38:40.369620 | orchestrator | ok: [testbed-manager] 2025-05-17 00:38:40.452617 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:38:40.681042 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:38:40.761640 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:38:40.845013 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:38:40.974466 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:38:40.977917 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:38:40.977979 | orchestrator | 2025-05-17 00:38:40.984621 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-17 00:38:40.984638 | orchestrator | Saturday 17 May 2025 00:38:40 +0000 (0:00:00.766) 
0:00:22.048 ********** 2025-05-17 00:38:41.409359 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-17 00:38:41.412940 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-17 00:38:41.493290 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-17 00:38:41.493893 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-17 00:38:41.960997 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-17 00:38:41.962968 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-17 00:38:41.965506 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-17 00:38:41.965533 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-17 00:38:41.966233 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-17 00:38:41.966609 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-17 00:38:41.967694 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-17 00:38:41.968951 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-17 00:38:41.969896 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-17 00:38:41.970724 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-17 00:38:41.971957 | orchestrator | 2025-05-17 00:38:41.973033 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-17 00:38:41.973390 | orchestrator | Saturday 17 May 2025 00:38:41 +0000 (0:00:00.985) 0:00:23.033 ********** 2025-05-17 00:38:42.277490 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:38:42.378198 | orchestrator | skipping: 
[testbed-node-0] 2025-05-17 00:38:42.462280 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:38:42.551033 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:38:42.634875 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:38:43.774711 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:38:43.775203 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:38:43.777915 | orchestrator | 2025-05-17 00:38:43.777940 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-17 00:38:43.778518 | orchestrator | Saturday 17 May 2025 00:38:43 +0000 (0:00:01.814) 0:00:24.847 ********** 2025-05-17 00:38:43.927284 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:38:44.010318 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:38:44.260170 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:38:44.343721 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:38:44.424720 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:38:44.471477 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:38:44.471993 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:38:44.472599 | orchestrator | 2025-05-17 00:38:44.473441 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:38:44.473707 | orchestrator | 2025-05-17 00:38:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-17 00:38:44.473932 | orchestrator | 2025-05-17 00:38:44 | INFO  | Please wait and do not abort execution. 
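The "Copy dispatcher scripts" task above installs hooks such as `routable.d/vxlan.sh` for networkd-dispatcher, which runs executables from `/etc/networkd-dispatcher/routable.d/` once an interface reaches the "routable" state. The actual testbed hook contents are not shown in this log; the following is only a minimal sketch of such a hook (interface name, VNI, and the temporary install path are illustrative assumptions), syntax-checked rather than executed since creating VXLAN devices requires root:

```shell
# Write a hypothetical routable.d hook into a temp dir and syntax-check it.
hookdir=$(mktemp -d)
cat > "$hookdir/vxlan.sh" <<'EOF'
#!/bin/sh
# Hypothetical hook: create a VXLAN overlay once the underlay is routable.
# Device name "vxlan0", VNI 42, and $IFACE usage are placeholders.
set -eu
ip link show vxlan0 >/dev/null 2>&1 && exit 0
ip link add vxlan0 type vxlan id 42 dstport 4789 dev "$IFACE"
ip link set vxlan0 up
EOF
chmod +x "$hookdir/vxlan.sh"
sh -n "$hookdir/vxlan.sh"   # parse only; do not run (needs CAP_NET_ADMIN)
echo "sketch hook written to $hookdir/vxlan.sh"
```

networkd-dispatcher invokes every executable in the state directory with interface details in its environment, which is why the hook reads `$IFACE` instead of hard-coding the underlay device.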
2025-05-17 00:38:44.475014 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:38:44.476129 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:38:44.476794 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:38:44.477316 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:38:44.478181 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:38:44.478536 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:38:44.479297 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:38:44.481379 | orchestrator |
2025-05-17 00:38:44.481944 | orchestrator | Saturday 17 May 2025 00:38:44 +0000 (0:00:00.699) 0:00:25.547 **********
2025-05-17 00:38:44.482348 | orchestrator | ===============================================================================
2025-05-17 00:38:44.482824 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 2.07s
2025-05-17 00:38:44.483598 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.04s
2025-05-17 00:38:44.483885 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.03s
2025-05-17 00:38:44.484301 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.81s
2025-05-17 00:38:44.484831 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.75s
2025-05-17 00:38:44.485340 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.70s
2025-05-17 00:38:44.485763 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.69s
2025-05-17 00:38:44.486199 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.51s
2025-05-17 00:38:44.486671 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.37s
2025-05-17 00:38:44.487090 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.20s
2025-05-17 00:38:44.487558 | orchestrator | osism.commons.network : Create required directories --------------------- 1.17s
2025-05-17 00:38:44.487906 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s
2025-05-17 00:38:44.488215 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 0.99s
2025-05-17 00:38:44.488740 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 0.99s
2025-05-17 00:38:44.489007 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s
2025-05-17 00:38:44.489369 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.77s
2025-05-17 00:38:44.489822 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.77s
2025-05-17 00:38:44.490183 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.70s
2025-05-17 00:38:44.490486 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.67s
2025-05-17 00:38:44.991143 | orchestrator | + osism apply wireguard
2025-05-17 00:38:46.395609 | orchestrator | 2025-05-17 00:38:46 | INFO  | Task 9a1408ae-5bf6-4fee-9b28-4c7623bef469 (wireguard) was prepared for execution.
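The network play above renders a netplan file per host ("Copy netplan configuration") and then removes the cloud-init default, keeping `/etc/netplan/01-osism.yaml` ("Remove unused configuration files"). The rendered content is not shown in the log; as a rough sketch only, a generated `01-osism.yaml` has the general shape below, with interface names and addresses being placeholders rather than values from this testbed:

```yaml
# Hypothetical shape of a rendered /etc/netplan/01-osism.yaml; the real file
# is templated by osism.commons.network from host variables.
network:
  version: 2
  ethernets:
    ens3:                      # management interface name: placeholder
      dhcp4: false
      addresses:
        - 192.168.16.10/20     # placeholder address
      nameservers:
        addresses: [8.8.8.8]   # placeholder resolver
      routes:
        - to: default
          via: 192.168.16.1    # placeholder gateway
```

Because the role writes its own file and deletes `50-cloud-init.yaml`, netplan applies exactly one authoritative configuration per host, which is why the "Netplan configuration changed" handler only fires when the rendered file actually differs.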
2025-05-17 00:38:46.395744 | orchestrator | 2025-05-17 00:38:46 | INFO  | It takes a moment until task 9a1408ae-5bf6-4fee-9b28-4c7623bef469 (wireguard) has been started and output is visible here.
2025-05-17 00:38:49.519149 | orchestrator |
2025-05-17 00:38:49.519254 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-05-17 00:38:49.519997 | orchestrator |
2025-05-17 00:38:49.520790 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-05-17 00:38:49.521194 | orchestrator | Saturday 17 May 2025 00:38:49 +0000 (0:00:00.165) 0:00:00.165 **********
2025-05-17 00:38:50.967535 | orchestrator | ok: [testbed-manager]
2025-05-17 00:38:50.967645 | orchestrator |
2025-05-17 00:38:50.967716 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-05-17 00:38:50.967732 | orchestrator | Saturday 17 May 2025 00:38:50 +0000 (0:00:01.450) 0:00:01.615 **********
2025-05-17 00:38:57.209722 | orchestrator | changed: [testbed-manager]
2025-05-17 00:38:57.211019 | orchestrator |
2025-05-17 00:38:57.211272 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-05-17 00:38:57.211947 | orchestrator | Saturday 17 May 2025 00:38:57 +0000 (0:00:06.244) 0:00:07.860 **********
2025-05-17 00:38:57.743549 | orchestrator | changed: [testbed-manager]
2025-05-17 00:38:57.743774 | orchestrator |
2025-05-17 00:38:57.743796 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-05-17 00:38:57.743881 | orchestrator | Saturday 17 May 2025 00:38:57 +0000 (0:00:00.536) 0:00:08.396 **********
2025-05-17 00:38:58.175251 | orchestrator | changed: [testbed-manager]
2025-05-17 00:38:58.176360 | orchestrator |
2025-05-17 00:38:58.176389 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-05-17 00:38:58.176595 | orchestrator | Saturday 17 May 2025 00:38:58 +0000 (0:00:00.430) 0:00:08.827 **********
2025-05-17 00:38:58.690898 | orchestrator | ok: [testbed-manager]
2025-05-17 00:38:58.691742 | orchestrator |
2025-05-17 00:38:58.692336 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-05-17 00:38:58.693270 | orchestrator | Saturday 17 May 2025 00:38:58 +0000 (0:00:00.513) 0:00:09.341 **********
2025-05-17 00:38:59.218319 | orchestrator | ok: [testbed-manager]
2025-05-17 00:38:59.218564 | orchestrator |
2025-05-17 00:38:59.219617 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-05-17 00:38:59.220190 | orchestrator | Saturday 17 May 2025 00:38:59 +0000 (0:00:00.529) 0:00:09.870 **********
2025-05-17 00:38:59.642727 | orchestrator | ok: [testbed-manager]
2025-05-17 00:38:59.643648 | orchestrator |
2025-05-17 00:38:59.644927 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-05-17 00:38:59.644993 | orchestrator | Saturday 17 May 2025 00:38:59 +0000 (0:00:00.423) 0:00:10.294 **********
2025-05-17 00:39:00.883046 | orchestrator | changed: [testbed-manager]
2025-05-17 00:39:00.884526 | orchestrator |
2025-05-17 00:39:00.884562 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-05-17 00:39:00.885010 | orchestrator | Saturday 17 May 2025 00:39:00 +0000 (0:00:00.936) 0:00:11.533 **********
2025-05-17 00:39:01.819782 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-17 00:39:01.821193 | orchestrator | changed: [testbed-manager]
2025-05-17 00:39:01.824875 | orchestrator |
2025-05-17 00:39:01.825665 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-05-17 00:39:01.828688 | orchestrator | Saturday 17 May 2025 00:39:01 +0000 (0:00:00.936) 0:00:12.470 **********
2025-05-17 00:39:03.549138 | orchestrator | changed: [testbed-manager]
2025-05-17 00:39:03.549255 | orchestrator |
2025-05-17 00:39:03.549933 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-05-17 00:39:03.550510 | orchestrator | Saturday 17 May 2025 00:39:03 +0000 (0:00:01.727) 0:00:14.198 **********
2025-05-17 00:39:04.464017 | orchestrator | changed: [testbed-manager]
2025-05-17 00:39:04.465049 | orchestrator |
2025-05-17 00:39:04.465456 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:39:04.466190 | orchestrator | 2025-05-17 00:39:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:39:04.466439 | orchestrator | 2025-05-17 00:39:04 | INFO  | Please wait and do not abort execution.
2025-05-17 00:39:04.467382 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:39:04.467846 | orchestrator |
2025-05-17 00:39:04.468290 | orchestrator | Saturday 17 May 2025 00:39:04 +0000 (0:00:00.917) 0:00:15.116 **********
2025-05-17 00:39:04.469197 | orchestrator | ===============================================================================
2025-05-17 00:39:04.469453 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.24s
2025-05-17 00:39:04.470097 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s
2025-05-17 00:39:04.470468 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.45s
2025-05-17 00:39:04.471007 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.24s
2025-05-17 00:39:04.471415 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s
2025-05-17 00:39:04.471969 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s
2025-05-17 00:39:04.472328 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2025-05-17 00:39:04.472579 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s
2025-05-17 00:39:04.472948 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s
2025-05-17 00:39:04.473241 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2025-05-17 00:39:04.473664 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2025-05-17 00:39:04.959379 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-05-17 00:39:04.988971 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-05-17 00:39:04.989055 | orchestrator | Dload Upload Total Spent Left Speed
2025-05-17 00:39:05.072164 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 168 0 --:--:-- --:--:-- --:--:-- 168
2025-05-17 00:39:05.080389 | orchestrator | + osism apply --environment custom workarounds
2025-05-17 00:39:06.462506 | orchestrator | 2025-05-17 00:39:06 | INFO  | Trying to run play workarounds in environment custom
2025-05-17 00:39:06.508228 | orchestrator | 2025-05-17 00:39:06 | INFO  | Task 805c5f46-7bd5-4fe5-b2d5-9c66afde5457 (workarounds) was prepared for execution.
2025-05-17 00:39:06.508324 | orchestrator | 2025-05-17 00:39:06 | INFO  | It takes a moment until task 805c5f46-7bd5-4fe5-b2d5-9c66afde5457 (workarounds) has been started and output is visible here.
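The wireguard play above generates server and preshared keys, renders `wg0.conf`, and copies per-client configuration files. As a rough sketch of what such a rendered server configuration can look like (the interface addresses, port, and placeholder keys below are illustrative assumptions, not values taken from this job; a sandbox path is used so the sketch has no side effects):

```shell
# Hypothetical sketch of a WireGuard server config as a role like
# osism.services.wireguard might render it. Real keys would come from
# `wg genkey | wg pubkey` and `wg genpsk`; placeholders are used here.
cat > /tmp/wg0.conf <<'EOF'
[Interface]
PrivateKey = <server-private-key>
Address = 192.168.48.1/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32
EOF
# One [Peer] section is rendered per client configuration file.
grep -c '^\[Peer\]' /tmp/wg0.conf
```

On a real host the file would live at `/etc/wireguard/wg0.conf` and be activated via `wg-quick@wg0.service`, matching the "Manage wg-quick@wg0.service service" task in the log.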
2025-05-17 00:39:09.545275 | orchestrator |
2025-05-17 00:39:09.545388 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 00:39:09.546883 | orchestrator |
2025-05-17 00:39:09.548747 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-05-17 00:39:09.550480 | orchestrator | Saturday 17 May 2025 00:39:09 +0000 (0:00:00.149) 0:00:00.149 **********
2025-05-17 00:39:09.706923 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-05-17 00:39:09.787450 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-05-17 00:39:09.867578 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-05-17 00:39:09.950446 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-05-17 00:39:10.032342 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-05-17 00:39:10.276959 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-05-17 00:39:10.277060 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-05-17 00:39:10.278072 | orchestrator |
2025-05-17 00:39:10.278544 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-05-17 00:39:10.279905 | orchestrator |
2025-05-17 00:39:10.283225 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-17 00:39:10.283432 | orchestrator | Saturday 17 May 2025 00:39:10 +0000 (0:00:00.733) 0:00:00.882 **********
2025-05-17 00:39:12.730775 | orchestrator | ok: [testbed-manager]
2025-05-17 00:39:12.732116 | orchestrator |
2025-05-17 00:39:12.734721 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-05-17 00:39:12.735218 | orchestrator |
2025-05-17 00:39:12.736182 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-17 00:39:12.736205 | orchestrator | Saturday 17 May 2025 00:39:12 +0000 (0:00:02.448) 0:00:03.331 **********
2025-05-17 00:39:14.472837 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:39:14.476626 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:39:14.476710 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:39:14.476727 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:39:14.477337 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:39:14.478386 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:39:14.479262 | orchestrator |
2025-05-17 00:39:14.479367 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-05-17 00:39:14.480366 | orchestrator |
2025-05-17 00:39:14.481069 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-05-17 00:39:14.481492 | orchestrator | Saturday 17 May 2025 00:39:14 +0000 (0:00:01.745) 0:00:05.077 **********
2025-05-17 00:39:15.950139 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-17 00:39:15.950337 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-17 00:39:15.955063 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-17 00:39:15.955616 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-17 00:39:15.956294 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-17 00:39:15.956891 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-17 00:39:15.957571 | orchestrator |
2025-05-17 00:39:15.958501 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-05-17 00:39:15.959066 | orchestrator | Saturday 17 May 2025 00:39:15 +0000 (0:00:01.475) 0:00:06.553 **********
2025-05-17 00:39:19.685628 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:39:19.685807 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:39:19.685825 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:39:19.685838 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:39:19.685849 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:39:19.685963 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:39:19.686091 | orchestrator |
2025-05-17 00:39:19.686830 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-05-17 00:39:19.687162 | orchestrator | Saturday 17 May 2025 00:39:19 +0000 (0:00:03.727) 0:00:10.280 **********
2025-05-17 00:39:19.831418 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:39:19.902268 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:39:19.976136 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:39:20.199916 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:39:20.341938 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:39:20.342757 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:39:20.343308 | orchestrator |
2025-05-17 00:39:20.343941 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-05-17 00:39:20.344768 | orchestrator |
2025-05-17 00:39:20.345870 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-05-17 00:39:20.346162 | orchestrator | Saturday 17 May 2025 00:39:20 +0000 (0:00:00.668) 0:00:10.949 **********
2025-05-17 00:39:21.966764 | orchestrator | changed: [testbed-manager]
2025-05-17 00:39:21.967030 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:39:21.968140 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:39:21.969119 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:39:21.969822 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:39:21.971640 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:39:21.972830 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:39:21.973251 | orchestrator |
2025-05-17 00:39:21.973743 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-05-17 00:39:21.974338 | orchestrator | Saturday 17 May 2025 00:39:21 +0000 (0:00:01.623) 0:00:12.572 **********
2025-05-17 00:39:23.508212 | orchestrator | changed: [testbed-manager]
2025-05-17 00:39:23.508464 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:39:23.509752 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:39:23.510945 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:39:23.511412 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:39:23.512300 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:39:23.512887 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:39:23.513435 | orchestrator |
2025-05-17 00:39:23.514072 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-05-17 00:39:23.514381 | orchestrator | Saturday 17 May 2025 00:39:23 +0000 (0:00:01.537) 0:00:14.110 **********
2025-05-17 00:39:24.950945 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:39:24.951954 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:39:24.951996 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:39:24.952099 | orchestrator | ok: [testbed-manager]
2025-05-17 00:39:24.953429 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:39:24.953822 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:39:24.956726 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:39:24.956877 | orchestrator |
2025-05-17 00:39:24.957482 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-05-17 00:39:24.957776 | orchestrator | Saturday 17 May 2025 00:39:24 +0000 (0:00:01.441) 0:00:15.552 **********
2025-05-17 00:39:26.711201 | orchestrator | changed: [testbed-manager]
2025-05-17 00:39:26.714982 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:39:26.715036 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:39:26.715048 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:39:26.715154 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:39:26.715777 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:39:26.716222 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:39:26.716653 | orchestrator |
2025-05-17 00:39:26.717891 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-05-17 00:39:26.718800 | orchestrator | Saturday 17 May 2025 00:39:26 +0000 (0:00:01.765) 0:00:17.317 **********
2025-05-17 00:39:26.868199 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:39:26.939547 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:39:27.023992 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:39:27.096355 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:39:27.329539 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:39:27.474175 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:39:27.474411 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:39:27.475071 | orchestrator |
2025-05-17 00:39:27.475559 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-05-17 00:39:27.476254 | orchestrator |
2025-05-17 00:39:27.476876 | orchestrator | TASK [Install python3-docker] **************************************************
2025-05-17 00:39:27.477361 | orchestrator | Saturday 17 May 2025 00:39:27 +0000 (0:00:00.764) 0:00:18.081 **********
2025-05-17 00:39:29.899492 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:39:29.899657 | orchestrator | ok: [testbed-manager]
2025-05-17 00:39:29.899794 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:39:29.899812 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:39:29.900174 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:39:29.900466 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:39:29.900794 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:39:29.901351 | orchestrator |
2025-05-17 00:39:29.901897 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:39:29.902282 | orchestrator | 2025-05-17 00:39:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:39:29.902309 | orchestrator | 2025-05-17 00:39:29 | INFO  | Please wait and do not abort execution.
2025-05-17 00:39:29.903581 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:39:29.903905 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:29.904468 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:29.904489 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:29.904881 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:29.904904 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:29.905333 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:29.906481 | orchestrator |
2025-05-17 00:39:29.907167 | orchestrator | Saturday 17 May 2025 00:39:29 +0000 (0:00:02.422) 0:00:20.504 **********
2025-05-17 00:39:29.907915 | orchestrator | ===============================================================================
2025-05-17 00:39:29.908364 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.73s
2025-05-17 00:39:29.908998 | orchestrator | Apply netplan configuration --------------------------------------------- 2.45s
2025-05-17 00:39:29.909569 | orchestrator | Install python3-docker -------------------------------------------------- 2.42s
2025-05-17 00:39:29.910648 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s
2025-05-17 00:39:29.911036 | orchestrator | Apply netplan configuration --------------------------------------------- 1.75s
2025-05-17 00:39:29.911623 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.62s
2025-05-17 00:39:29.912403 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.54s
2025-05-17 00:39:29.913122 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.48s
2025-05-17 00:39:29.913666 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.44s
2025-05-17 00:39:29.914129 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.76s
2025-05-17 00:39:29.914564 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.73s
2025-05-17 00:39:29.915009 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s
2025-05-17 00:39:30.410908 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-05-17 00:39:31.818981 | orchestrator | 2025-05-17 00:39:31 | INFO  | Task 626d42dc-515a-4d7a-bbd0-5e3921159087 (reboot) was prepared for execution.
2025-05-17 00:39:31.819082 | orchestrator | 2025-05-17 00:39:31 | INFO  | It takes a moment until task 626d42dc-515a-4d7a-bbd0-5e3921159087 (reboot) has been started and output is visible here.
2025-05-17 00:39:34.842905 | orchestrator |
2025-05-17 00:39:34.843477 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-17 00:39:34.843508 | orchestrator |
2025-05-17 00:39:34.843659 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-17 00:39:34.847283 | orchestrator | Saturday 17 May 2025 00:39:34 +0000 (0:00:00.141) 0:00:00.141 **********
2025-05-17 00:39:34.943330 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:39:34.943681 | orchestrator |
2025-05-17 00:39:34.944193 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-17 00:39:34.946582 | orchestrator | Saturday 17 May 2025 00:39:34 +0000 (0:00:00.104) 0:00:00.245 **********
2025-05-17 00:39:35.872660 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:39:35.873019 | orchestrator |
2025-05-17 00:39:35.873920 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-17 00:39:35.874444 | orchestrator | Saturday 17 May 2025 00:39:35 +0000 (0:00:00.928) 0:00:01.174 **********
2025-05-17 00:39:35.981518 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:39:35.983884 | orchestrator |
2025-05-17 00:39:35.983919 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-17 00:39:35.984644 | orchestrator |
2025-05-17 00:39:35.985603 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-17 00:39:35.986694 | orchestrator | Saturday 17 May 2025 00:39:35 +0000 (0:00:00.110) 0:00:01.284 **********
2025-05-17 00:39:36.074230 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:39:36.074978 | orchestrator |
2025-05-17 00:39:36.075790 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-17 00:39:36.077263 | orchestrator | Saturday 17 May 2025 00:39:36 +0000 (0:00:00.092) 0:00:01.377 **********
2025-05-17 00:39:36.764552 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:39:36.765993 | orchestrator |
2025-05-17 00:39:36.767444 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-17 00:39:36.768081 | orchestrator | Saturday 17 May 2025 00:39:36 +0000 (0:00:00.689) 0:00:02.067 **********
2025-05-17 00:39:36.874120 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:39:36.874726 | orchestrator |
2025-05-17 00:39:36.875820 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-17 00:39:36.878129 | orchestrator |
2025-05-17 00:39:36.878998 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-17 00:39:36.879242 | orchestrator | Saturday 17 May 2025 00:39:36 +0000 (0:00:00.108) 0:00:02.175 **********
2025-05-17 00:39:36.974282 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:39:36.976070 | orchestrator |
2025-05-17 00:39:36.976170 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-17 00:39:36.977679 | orchestrator | Saturday 17 May 2025 00:39:36 +0000 (0:00:00.101) 0:00:02.277 **********
2025-05-17 00:39:37.731120 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:39:37.731346 | orchestrator |
2025-05-17 00:39:37.732092 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-17 00:39:37.733679 | orchestrator | Saturday 17 May 2025 00:39:37 +0000 (0:00:00.755) 0:00:03.033 **********
2025-05-17 00:39:37.852781 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:39:37.852992 | orchestrator |
2025-05-17 00:39:37.853549 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-17 00:39:37.854285 | orchestrator |
2025-05-17 00:39:37.854840 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-17 00:39:37.856367 | orchestrator | Saturday 17 May 2025 00:39:37 +0000 (0:00:00.120) 0:00:03.153 **********
2025-05-17 00:39:37.959813 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:39:37.960694 | orchestrator |
2025-05-17 00:39:37.961583 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-17 00:39:37.962590 | orchestrator | Saturday 17 May 2025 00:39:37 +0000 (0:00:00.107) 0:00:03.261 **********
2025-05-17 00:39:38.606675 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:39:38.607316 | orchestrator |
2025-05-17 00:39:38.608522 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-17 00:39:38.609917 | orchestrator | Saturday 17 May 2025 00:39:38 +0000 (0:00:00.646) 0:00:03.907 **********
2025-05-17 00:39:38.714255 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:39:38.714347 | orchestrator |
2025-05-17 00:39:38.714361 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-17 00:39:38.716334 | orchestrator |
2025-05-17 00:39:38.716374 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-17 00:39:38.716387 | orchestrator | Saturday 17 May 2025 00:39:38 +0000 (0:00:00.106) 0:00:04.014 **********
2025-05-17 00:39:38.813147 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:39:38.813313 | orchestrator |
2025-05-17 00:39:38.814256 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-17 00:39:38.816625 | orchestrator | Saturday 17 May 2025 00:39:38 +0000 (0:00:00.101) 0:00:04.116 **********
2025-05-17 00:39:39.436355 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:39:39.438936 | orchestrator |
2025-05-17 00:39:39.439638 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-17 00:39:39.440675 | orchestrator | Saturday 17 May 2025 00:39:39 +0000 (0:00:00.620) 0:00:04.737 **********
2025-05-17 00:39:39.545797 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:39:39.546820 | orchestrator |
2025-05-17 00:39:39.547172 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-17 00:39:39.548622 | orchestrator |
2025-05-17 00:39:39.549461 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-17 00:39:39.550232 | orchestrator | Saturday 17 May 2025 00:39:39 +0000 (0:00:00.109) 0:00:04.846 **********
2025-05-17 00:39:39.651550 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:39:39.656689 | orchestrator |
2025-05-17 00:39:39.656849 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-17 00:39:39.658465 | orchestrator | Saturday 17 May 2025 00:39:39 +0000 (0:00:00.105) 0:00:04.951 **********
2025-05-17 00:39:40.337228 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:39:40.337485 | orchestrator |
2025-05-17 00:39:40.338564 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-17 00:39:40.339503 | orchestrator | Saturday 17 May 2025 00:39:40 +0000 (0:00:00.688) 0:00:05.640 **********
2025-05-17 00:39:40.372987 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:39:40.373197 | orchestrator |
2025-05-17 00:39:40.374280 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:39:40.374620 | orchestrator | 2025-05-17 00:39:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:39:40.374699 | orchestrator | 2025-05-17 00:39:40 | INFO  | Please wait and do not abort execution.
2025-05-17 00:39:40.375448 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:40.375594 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:40.376389 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:40.376804 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:40.377425 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:40.377674 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:39:40.378103 | orchestrator |
2025-05-17 00:39:40.378290 | orchestrator | Saturday 17 May 2025 00:39:40 +0000 (0:00:00.036) 0:00:05.676 **********
2025-05-17 00:39:40.378703 | orchestrator | ===============================================================================
2025-05-17 00:39:40.379203 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.33s
2025-05-17 00:39:40.379469 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s
2025-05-17 00:39:40.379926 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s
2025-05-17 00:39:40.852077 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-05-17 00:39:42.257199 | orchestrator | 2025-05-17 00:39:42 | INFO  | Task e8b0327b-fc44-4e9a-8048-fa9d03a395e4 (wait-for-connection) was prepared for execution.
2025-05-17 00:39:42.257376 | orchestrator | 2025-05-17 00:39:42 | INFO  | It takes a moment until task e8b0327b-fc44-4e9a-8048-fa9d03a395e4 (wait-for-connection) was started and output is visible here.
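The sequence above deliberately splits rebooting from waiting: the "do not wait" task fires the reboot without blocking, and the separate `wait-for-connection` play then polls each node until SSH is back. A sketch of that split, assuming the classic async fire-and-forget pattern plus the standard `ansible.builtin.wait_for_connection` module (the exact task internals in the OSISM playbooks may differ):

```yaml
# Fire-and-forget reboot: async launches the command and poll: 0 means
# Ansible does not wait for it, so the play survives the SSH drop.
- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.shell: sleep 2 && /sbin/shutdown -r now
  async: 1
  poll: 0

# Run later, in a separate play, once all nodes have been kicked:
- name: Wait until remote system is reachable
  ansible.builtin.wait_for_connection:
    delay: 5         # give the host time to actually go down first
    timeout: 600
```

This matches the log: every "wait for the reboot to complete" task is skipped during the reboot play, and the ~13 s "Wait until remote system is reachable" task afterwards does the real waiting for all six nodes in parallel.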
2025-05-17 00:39:45.355948 | orchestrator |
2025-05-17 00:39:45.356252 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-05-17 00:39:45.357696 | orchestrator |
2025-05-17 00:39:45.358957 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-05-17 00:39:45.360199 | orchestrator | Saturday 17 May 2025 00:39:45 +0000 (0:00:00.186) 0:00:00.186 **********
2025-05-17 00:39:58.568068 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:39:58.568191 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:39:58.568207 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:39:58.568219 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:39:58.568299 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:39:58.569480 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:39:58.569683 | orchestrator |
2025-05-17 00:39:58.570010 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:39:58.570398 | orchestrator | 2025-05-17 00:39:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:39:58.570430 | orchestrator | 2025-05-17 00:39:58 | INFO  | Please wait and do not abort execution.
2025-05-17 00:39:58.571280 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:39:58.571811 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:39:58.572440 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:39:58.572860 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:39:58.573221 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:39:58.573651 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:39:58.574168 | orchestrator |
2025-05-17 00:39:58.574673 | orchestrator | Saturday 17 May 2025 00:39:58 +0000 (0:00:13.210) 0:00:13.396 **********
2025-05-17 00:39:58.575148 | orchestrator | ===============================================================================
2025-05-17 00:39:58.575786 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.21s
2025-05-17 00:39:59.076437 | orchestrator | + osism apply hddtemp
2025-05-17 00:40:00.486699 | orchestrator | 2025-05-17 00:40:00 | INFO  | Task 99c0ea42-a08b-4a5b-9158-57933d922bc4 (hddtemp) was prepared for execution.
2025-05-17 00:40:00.486861 | orchestrator | 2025-05-17 00:40:00 | INFO  | It takes a moment until task 99c0ea42-a08b-4a5b-9158-57933d922bc4 (hddtemp) has been started and output is visible here.
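The hddtemp play that follows removes the legacy hddtemp package and enables the kernel's drivetemp hwmon driver instead. A sketch of persistently enabling such a module (the `/etc/modules-load.d/` convention is standard systemd behavior; this sketch writes to a temporary path so it has no side effects on the host running it):

```shell
# systemd-modules-load reads /etc/modules-load.d/*.conf at boot, one
# module name per line. A real playbook would write to that directory;
# we use /tmp here purely for illustration.
conf=/tmp/modules-load.d-drivetemp.conf
echo drivetemp > "$conf"
cat "$conf"
# To load the module immediately on a real host: modprobe drivetemp
```

Once drivetemp is loaded, drive temperatures appear as regular hwmon sensors instead of requiring the userspace hddtemp daemon.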
2025-05-17 00:40:03.648595 | orchestrator |
2025-05-17 00:40:03.648808 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-05-17 00:40:03.649490 | orchestrator |
2025-05-17 00:40:03.650102 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-05-17 00:40:03.650834 | orchestrator | Saturday 17 May 2025 00:40:03 +0000 (0:00:00.209) 0:00:00.209 **********
2025-05-17 00:40:03.812627 | orchestrator | ok: [testbed-manager]
2025-05-17 00:40:03.894317 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:40:03.972865 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:40:04.046304 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:40:04.124220 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:40:04.357981 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:40:04.358223 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:40:04.358999 | orchestrator |
2025-05-17 00:40:04.360014 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-05-17 00:40:04.360344 | orchestrator | Saturday 17 May 2025 00:40:04 +0000 (0:00:00.710) 0:00:00.920 **********
2025-05-17 00:40:05.514709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:40:05.515336 | orchestrator |
2025-05-17 00:40:05.518543 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-05-17 00:40:05.518582 | orchestrator | Saturday 17 May 2025 00:40:05 +0000 (0:00:01.154) 0:00:02.074 **********
2025-05-17 00:40:07.455245 | orchestrator | ok: [testbed-manager]
2025-05-17 00:40:07.455399 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:40:07.455484 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:40:07.455889 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:40:07.455912 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:40:07.456281 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:40:07.456808 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:40:07.458277 | orchestrator |
2025-05-17 00:40:07.458575 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-05-17 00:40:07.459920 | orchestrator | Saturday 17 May 2025 00:40:07 +0000 (0:00:01.942) 0:00:04.017 **********
2025-05-17 00:40:08.085823 | orchestrator | changed: [testbed-manager]
2025-05-17 00:40:08.186798 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:40:08.626568 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:40:08.630847 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:40:08.631186 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:40:08.632229 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:40:08.632709 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:40:08.633573 | orchestrator |
2025-05-17 00:40:08.634112 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-05-17 00:40:08.634928 | orchestrator | Saturday 17 May 2025 00:40:08 +0000 (0:00:01.168) 0:00:05.185 **********
2025-05-17 00:40:09.841320 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:40:09.841867 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:40:09.842614 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:40:09.849226 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:40:09.849270 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:40:09.849280 | orchestrator | ok: [testbed-manager]
2025-05-17 00:40:09.849288 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:40:09.849296 | orchestrator |
2025-05-17 00:40:09.849306 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-05-17 00:40:09.849316 | orchestrator | Saturday 17 May 2025 00:40:09 +0000 (0:00:01.216) 0:00:06.402 **********
2025-05-17 00:40:10.092711 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:40:10.169567 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:40:10.261641 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:40:10.331343 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:40:10.445436 | orchestrator | changed: [testbed-manager]
2025-05-17 00:40:10.446289 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:40:10.449834 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:40:10.449915 | orchestrator |
2025-05-17 00:40:10.449933 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-05-17 00:40:10.450006 | orchestrator | Saturday 17 May 2025 00:40:10 +0000 (0:00:00.606) 0:00:07.008 **********
2025-05-17 00:40:22.798127 | orchestrator | changed: [testbed-manager]
2025-05-17 00:40:22.798246 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:40:22.798261 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:40:22.798272 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:40:22.799416 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:40:22.799954 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:40:22.800711 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:40:22.801118 | orchestrator |
2025-05-17 00:40:22.801738 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-05-17 00:40:22.802530 | orchestrator | Saturday 17 May 2025 00:40:22 +0000 (0:00:12.344) 0:00:19.353 **********
2025-05-17 00:40:23.985852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:40:23.986241 | orchestrator |
2025-05-17 00:40:23.987019 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-05-17 00:40:23.988067 | orchestrator | Saturday 17 May 2025 00:40:23 +0000 (0:00:01.193) 0:00:20.546 **********
2025-05-17 00:40:25.838387 | orchestrator | changed: [testbed-manager]
2025-05-17 00:40:25.838495 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:40:25.838625 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:40:25.839276 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:40:25.840721 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:40:25.842531 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:40:25.843518 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:40:25.844224 | orchestrator |
2025-05-17 00:40:25.845689 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:40:25.846315 | orchestrator | 2025-05-17 00:40:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:40:25.846791 | orchestrator | 2025-05-17 00:40:25 | INFO  | Please wait and do not abort execution.
2025-05-17 00:40:25.847963 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:40:25.848300 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:25.849176 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:25.849649 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:25.850296 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:25.850629 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:25.851363 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:25.851874 | orchestrator |
2025-05-17 00:40:25.852249 | orchestrator | Saturday 17 May 2025 00:40:25 +0000 (0:00:01.852) 0:00:22.399 **********
2025-05-17 00:40:25.852988 | orchestrator | ===============================================================================
2025-05-17 00:40:25.853628 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.34s
2025-05-17 00:40:25.854263 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s
2025-05-17 00:40:25.854667 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.85s
2025-05-17 00:40:25.855397 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s
2025-05-17 00:40:25.856202 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.19s
2025-05-17 00:40:25.856703 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.17s
2025-05-17 00:40:25.857695 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.15s
2025-05-17 00:40:25.858469 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s
2025-05-17 00:40:25.859029 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.61s
2025-05-17 00:40:26.404389 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-17 00:40:28.357789 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-17 00:40:28.357899 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-17 00:40:28.357916 | orchestrator | + local max_attempts=60
2025-05-17 00:40:28.357930 | orchestrator | + local name=ceph-ansible
2025-05-17 00:40:28.357941 | orchestrator | + local attempt_num=1
2025-05-17 00:40:28.358271 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-17 00:40:28.400230 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-17 00:40:28.400299 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-17 00:40:28.400312 | orchestrator | + local max_attempts=60
2025-05-17 00:40:28.400323 | orchestrator | + local name=kolla-ansible
2025-05-17 00:40:28.400334 | orchestrator | + local attempt_num=1
2025-05-17 00:40:28.400851 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-17 00:40:28.427504 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-17 00:40:28.427579 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-17 00:40:28.427595 | orchestrator | + local max_attempts=60
2025-05-17 00:40:28.427607 | orchestrator | + local name=osism-ansible
2025-05-17 00:40:28.427618 | orchestrator | + local attempt_num=1
2025-05-17 00:40:28.428035 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-17 00:40:28.454459 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-17 00:40:28.454560 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-17 00:40:28.454577 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-17 00:40:28.594956 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-17 00:40:28.750877 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-17 00:40:28.905548 | orchestrator | ARA in osism-ansible already disabled.
2025-05-17 00:40:29.078724 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-17 00:40:29.080245 | orchestrator | + osism apply gather-facts
2025-05-17 00:40:30.474676 | orchestrator | 2025-05-17 00:40:30 | INFO  | Task e78cc7eb-b9bf-4df7-a0a5-24c6211ad412 (gather-facts) was prepared for execution.
2025-05-17 00:40:30.474839 | orchestrator | 2025-05-17 00:40:30 | INFO  | It takes a moment until task e78cc7eb-b9bf-4df7-a0a5-24c6211ad412 (gather-facts) has been started and output is visible here.
2025-05-17 00:40:33.495130 | orchestrator |
2025-05-17 00:40:33.495252 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-17 00:40:33.495361 | orchestrator |
2025-05-17 00:40:33.496048 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-17 00:40:33.496161 | orchestrator | Saturday 17 May 2025 00:40:33 +0000 (0:00:00.160) 0:00:00.160 **********
2025-05-17 00:40:38.285187 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:40:38.286173 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:40:38.286545 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:40:38.290081 | orchestrator | ok: [testbed-manager]
2025-05-17 00:40:38.290976 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:40:38.292342 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:40:38.293651 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:40:38.293699 | orchestrator |
2025-05-17 00:40:38.294299 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-17 00:40:38.295326 | orchestrator |
2025-05-17 00:40:38.295818 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-17 00:40:38.297138 | orchestrator | Saturday 17 May 2025 00:40:38 +0000 (0:00:04.792) 0:00:04.952 **********
2025-05-17 00:40:38.438708 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:40:38.510258 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:40:38.590658 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:40:38.659289 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:40:38.737497 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:40:38.783439 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:40:38.784232 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:40:38.785291 | orchestrator |
2025-05-17 00:40:38.787448 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:40:38.787515 | orchestrator | 2025-05-17 00:40:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:40:38.787531 | orchestrator | 2025-05-17 00:40:38 | INFO  | Please wait and do not abort execution.
2025-05-17 00:40:38.789938 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:38.790282 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:38.791852 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:38.792855 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:38.793529 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:38.794081 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:38.794929 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 00:40:38.795188 | orchestrator |
2025-05-17 00:40:38.796188 | orchestrator | Saturday 17 May 2025 00:40:38 +0000 (0:00:00.498) 0:00:05.451 **********
2025-05-17 00:40:38.796393 | orchestrator | ===============================================================================
2025-05-17 00:40:38.796980 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.79s
2025-05-17 00:40:38.797470 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-05-17 00:40:39.320201 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-05-17 00:40:39.340344 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-05-17 00:40:39.362505 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-05-17 00:40:39.381922 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-05-17 00:40:39.399152 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-05-17 00:40:39.415062 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-05-17 00:40:39.425711 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-05-17 00:40:39.442488 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-05-17 00:40:39.463380 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-05-17 00:40:39.483861 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-05-17 00:40:39.501557 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-05-17 00:40:39.515699 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-05-17 00:40:39.537866 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-05-17 00:40:39.558295 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-05-17 00:40:39.576959 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-05-17 00:40:39.591175 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-05-17 00:40:39.604723 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-05-17 00:40:39.622182 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-05-17 00:40:39.635301 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-05-17 00:40:39.648579 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-05-17 00:40:39.668902 | orchestrator | + [[ false == \t\r\u\e ]]
2025-05-17 00:40:40.035442 | orchestrator | ok: Runtime: 0:25:08.582822
2025-05-17 00:40:40.143728 |
2025-05-17 00:40:40.143871 | TASK [Deploy services]
2025-05-17 00:40:40.675880 | orchestrator | skipping: Conditional result was False
2025-05-17 00:40:40.693014 |
2025-05-17 00:40:40.693188 | TASK [Deploy in a nutshell]
2025-05-17 00:40:41.412964 | orchestrator | + set -e
2025-05-17 00:40:41.413220 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-17 00:40:41.413247 | orchestrator | ++ export INTERACTIVE=false
2025-05-17 00:40:41.413303 | orchestrator | ++ INTERACTIVE=false
2025-05-17 00:40:41.413318 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-17 00:40:41.413331 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-17 00:40:41.413398 | orchestrator | + source /opt/manager-vars.sh
2025-05-17 00:40:41.413467 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-17 00:40:41.413498 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-17 00:40:41.413513 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-17 00:40:41.413550 | orchestrator | ++ CEPH_VERSION=reef
2025-05-17 00:40:41.413563 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-17 00:40:41.413603 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-17 00:40:41.413615 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-17 00:40:41.413636 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-17 00:40:41.413647 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-17 00:40:41.413661 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-17 00:40:41.413673 | orchestrator | ++ export ARA=false
2025-05-17 00:40:41.413684 | orchestrator | ++ ARA=false
2025-05-17 00:40:41.413695 | orchestrator | ++ export TEMPEST=false
2025-05-17 00:40:41.413708 | orchestrator | ++ TEMPEST=false
2025-05-17 00:40:41.413719 | orchestrator | ++ export IS_ZUUL=true
2025-05-17 00:40:41.413730 | orchestrator | ++ IS_ZUUL=true
2025-05-17 00:40:41.413741 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54
2025-05-17 00:40:41.413781 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54
2025-05-17 00:40:41.413807 | orchestrator | ++ export EXTERNAL_API=false
2025-05-17 00:40:41.413819 | orchestrator | ++ EXTERNAL_API=false
2025-05-17 00:40:41.413830 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-17 00:40:41.413841 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-17 00:40:41.413852 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-17 00:40:41.413863 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-17 00:40:41.413874 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-17 00:40:41.413885 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-17 00:40:41.413896 | orchestrator | + echo
2025-05-17 00:40:41.413908 | orchestrator |
2025-05-17 00:40:41.413919 | orchestrator | # PULL IMAGES
2025-05-17 00:40:41.413930 | orchestrator |
2025-05-17 00:40:41.413941 | orchestrator | + echo '# PULL IMAGES'
2025-05-17 00:40:41.413952 | orchestrator | + echo
2025-05-17 00:40:41.415342 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-17 00:40:41.478626 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-17 00:40:41.478704 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-05-17 00:40:42.862464 | orchestrator | 2025-05-17 00:40:42 | INFO  | Trying to run play pull-images in environment custom
2025-05-17 00:40:42.909470 | orchestrator | 2025-05-17 00:40:42 | INFO  | Task 72685a4a-dfd4-4ff9-8b8a-75c24297a28a (pull-images) was prepared for execution.
2025-05-17 00:40:42.909509 | orchestrator | 2025-05-17 00:40:42 | INFO  | It takes a moment until task 72685a4a-dfd4-4ff9-8b8a-75c24297a28a (pull-images) has been started and output is visible here.
2025-05-17 00:40:45.845016 | orchestrator |
2025-05-17 00:40:45.845473 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-17 00:40:45.845729 | orchestrator |
2025-05-17 00:40:45.846229 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-05-17 00:40:45.846985 | orchestrator | Saturday 17 May 2025 00:40:45 +0000 (0:00:00.106) 0:00:00.106 **********
2025-05-17 00:41:22.340285 | orchestrator | changed: [testbed-manager]
2025-05-17 00:41:22.340489 | orchestrator |
2025-05-17 00:41:22.340508 | orchestrator | TASK [Pull other images] *******************************************************
2025-05-17 00:41:22.340521 | orchestrator | Saturday 17 May 2025 00:41:22 +0000 (0:00:36.492) 0:00:36.599 **********
2025-05-17 00:42:08.110788 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-05-17 00:42:08.110959 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-05-17 00:42:08.110976 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-05-17 00:42:08.110988 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-05-17 00:42:08.110999 | orchestrator | changed: [testbed-manager] => (item=common)
2025-05-17 00:42:08.111011 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-05-17 00:42:08.112509 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-05-17 00:42:08.113636 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-05-17 00:42:08.114150 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-05-17 00:42:08.115305 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-05-17 00:42:08.116752 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-05-17 00:42:08.116771 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-05-17 00:42:08.117684 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-05-17 00:42:08.118365 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-05-17 00:42:08.118998 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-05-17 00:42:08.119571 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-05-17 00:42:08.120046 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-05-17 00:42:08.120594 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-05-17 00:42:08.121069 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-05-17 00:42:08.121500 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-05-17 00:42:08.121947 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-05-17 00:42:08.122436 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-05-17 00:42:08.123110 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-05-17 00:42:08.123376 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-05-17 00:42:08.124023 | orchestrator |
2025-05-17 00:42:08.124619 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:42:08.125012 | orchestrator | 2025-05-17 00:42:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:42:08.125035 | orchestrator | 2025-05-17 00:42:08 | INFO  | Please wait and do not abort execution.
2025-05-17 00:42:08.125450 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:42:08.125810 | orchestrator |
2025-05-17 00:42:08.126321 | orchestrator | Saturday 17 May 2025 00:42:08 +0000 (0:00:45.772) 0:01:22.372 **********
2025-05-17 00:42:08.127036 | orchestrator | ===============================================================================
2025-05-17 00:42:08.127516 | orchestrator | Pull other images ------------------------------------------------------ 45.77s
2025-05-17 00:42:08.127791 | orchestrator | Pull keystone image ---------------------------------------------------- 36.49s
2025-05-17 00:42:10.050742 | orchestrator | 2025-05-17 00:42:10 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-17 00:42:10.097499 | orchestrator | 2025-05-17 00:42:10 | INFO  | Task 0c507480-85b3-400c-9233-55531c0aca0c (wipe-partitions) was prepared for execution.
2025-05-17 00:42:10.097602 | orchestrator | 2025-05-17 00:42:10 | INFO  | It takes a moment until task 0c507480-85b3-400c-9233-55531c0aca0c (wipe-partitions) has been started and output is visible here.
2025-05-17 00:42:13.209486 | orchestrator |
2025-05-17 00:42:13.210396 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-17 00:42:13.210431 | orchestrator |
2025-05-17 00:42:13.210443 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-17 00:42:13.211183 | orchestrator | Saturday 17 May 2025 00:42:13 +0000 (0:00:00.092) 0:00:00.092 **********
2025-05-17 00:42:13.744222 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:42:13.744346 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:42:13.744363 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:42:13.744387 | orchestrator |
2025-05-17 00:42:13.744510 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-17 00:42:13.746619 | orchestrator | Saturday 17 May 2025 00:42:13 +0000 (0:00:00.536) 0:00:00.628 **********
2025-05-17 00:42:13.883696 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:13.959406 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:42:13.959482 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:42:13.959490 | orchestrator |
2025-05-17 00:42:13.960344 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-17 00:42:13.960437 | orchestrator | Saturday 17 May 2025 00:42:13 +0000 (0:00:00.213) 0:00:00.842 **********
2025-05-17 00:42:14.615255 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:42:14.616596 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:42:14.617579 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:42:14.617606 | orchestrator |
2025-05-17 00:42:14.617722 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-17 00:42:14.618472 | orchestrator | Saturday 17 May 2025 00:42:14 +0000 (0:00:00.655) 0:00:01.497 **********
2025-05-17 00:42:14.767797 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:14.849167 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:42:14.849305 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:42:14.849322 | orchestrator |
2025-05-17 00:42:14.849340 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-17 00:42:14.849353 | orchestrator | Saturday 17 May 2025 00:42:14 +0000 (0:00:00.235) 0:00:01.733 **********
2025-05-17 00:42:16.076100 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-17 00:42:16.076218 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-17 00:42:16.076229 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-17 00:42:16.076283 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-17 00:42:16.076579 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-17 00:42:16.079452 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-17 00:42:16.079471 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-17 00:42:16.081113 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-17 00:42:16.082516 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-17 00:42:16.082556 | orchestrator |
2025-05-17 00:42:16.082595 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-17 00:42:16.082945 | orchestrator | Saturday 17 May 2025 00:42:16 +0000 (0:00:01.225) 0:00:02.959 **********
2025-05-17 00:42:17.359103 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-17 00:42:17.359209 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-17 00:42:17.359716 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-17 00:42:17.360373 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-17 00:42:17.360859 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-17 00:42:17.362434 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-17 00:42:17.362803 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-17 00:42:17.366376 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-17 00:42:17.366706 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-17 00:42:17.367167 | orchestrator |
2025-05-17 00:42:17.367441 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-17 00:42:17.368100 | orchestrator | Saturday 17 May 2025 00:42:17 +0000 (0:00:01.283) 0:00:04.242 **********
2025-05-17 00:42:19.598669 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-17 00:42:19.599101 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-17 00:42:19.603717 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-17 00:42:19.603848 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-17 00:42:19.605061 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-17 00:42:19.607826 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-17 00:42:19.608692 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-17 00:42:19.609043 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-17 00:42:19.611760 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-17 00:42:19.612281 | orchestrator |
2025-05-17 00:42:19.612931 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-17 00:42:19.615774 | orchestrator | Saturday 17 May 2025 00:42:19 +0000 (0:00:02.232) 0:00:06.475 **********
2025-05-17 00:42:20.183245 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:42:20.183373 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:42:20.183782 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:42:20.183850 | orchestrator |
2025-05-17 00:42:20.183907 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-17 00:42:20.187318 | orchestrator | Saturday 17 May 2025 00:42:20 +0000 (0:00:00.591) 0:00:07.067 **********
2025-05-17 00:42:20.832124 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:42:20.832703 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:42:20.832727 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:42:20.832736 | orchestrator |
2025-05-17 00:42:20.833600 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:42:20.833638 | orchestrator | 2025-05-17 00:42:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:42:20.833648 | orchestrator | 2025-05-17 00:42:20 | INFO  | Please wait and do not abort execution.
2025-05-17 00:42:20.833885 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:42:20.835014 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:42:20.838586 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:42:20.838718 | orchestrator |
2025-05-17 00:42:20.839104 | orchestrator | Saturday 17 May 2025 00:42:20 +0000 (0:00:00.645) 0:00:07.712 **********
2025-05-17 00:42:20.839666 | orchestrator | ===============================================================================
2025-05-17 00:42:20.839772 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.23s
2025-05-17 00:42:20.840037 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.28s
2025-05-17 00:42:20.840380 | orchestrator | Check device availability ----------------------------------------------- 1.23s
2025-05-17 00:42:20.840602 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.66s
2025-05-17 00:42:20.840875 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s
2025-05-17 00:42:20.844732 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s
2025-05-17 00:42:20.844865 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.54s
2025-05-17 00:42:20.845217 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s
2025-05-17 00:42:20.845492 | orchestrator | Remove all rook related logical devices --------------------------------- 0.21s
2025-05-17 00:42:22.920595 | orchestrator | 2025-05-17 00:42:22 | INFO  | Task f95e4f01-7295-4802-bb68-15e0c3e7a06d (facts) was prepared for execution.
2025-05-17 00:42:22.920715 | orchestrator | 2025-05-17 00:42:22 | INFO  | It takes a moment until task f95e4f01-7295-4802-bb68-15e0c3e7a06d (facts) has been started and output is visible here.
2025-05-17 00:42:26.179005 | orchestrator |
2025-05-17 00:42:26.179792 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-17 00:42:26.180312 | orchestrator |
2025-05-17 00:42:26.181183 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-17 00:42:26.182132 | orchestrator | Saturday 17 May 2025 00:42:26 +0000 (0:00:00.183) 0:00:00.183 **********
2025-05-17 00:42:26.638511 | orchestrator | ok: [testbed-manager]
2025-05-17 00:42:27.151628 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:42:27.151991 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:42:27.152397 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:42:27.153424 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:42:27.154376 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:42:27.155863 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:42:27.157686 | orchestrator |
2025-05-17 00:42:27.157736 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-17 00:42:27.158449 | orchestrator | Saturday 17 May 2025 00:42:27 +0000 (0:00:00.970) 0:00:01.153 **********
2025-05-17 00:42:27.287790 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:42:27.353559 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:42:27.421801 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:42:27.489007 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:42:27.553196 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:28.170796 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:42:28.173606 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:42:28.175033 | orchestrator |
2025-05-17 00:42:28.176008 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-17 00:42:28.176717 | orchestrator |
2025-05-17 00:42:28.177214 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-17 00:42:28.178075 | orchestrator | Saturday 17 May 2025 00:42:28 +0000 (0:00:01.020) 0:00:02.174 **********
2025-05-17 00:42:32.735475 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:42:32.735621 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:42:32.735639 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:42:32.735898 | orchestrator | ok: [testbed-manager]
2025-05-17 00:42:32.736329 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:42:32.736798 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:42:32.743504 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:42:32.747429 | orchestrator |
2025-05-17 00:42:32.748793 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-17 00:42:32.750319 | orchestrator |
2025-05-17 00:42:32.751203 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-17 00:42:32.753498 | orchestrator | Saturday 17 May 2025 00:42:32 +0000 (0:00:04.565)
0:00:06.740 ********** 2025-05-17 00:42:33.196946 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:42:33.297478 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:42:33.384942 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:42:33.478895 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:42:33.591815 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:42:33.637046 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:42:33.637259 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:42:33.639786 | orchestrator | 2025-05-17 00:42:33.639872 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:42:33.640629 | orchestrator | 2025-05-17 00:42:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-17 00:42:33.640655 | orchestrator | 2025-05-17 00:42:33 | INFO  | Please wait and do not abort execution. 2025-05-17 00:42:33.640937 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 00:42:33.641194 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 00:42:33.641755 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 00:42:33.642244 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 00:42:33.642343 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 00:42:33.643058 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 00:42:33.643080 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 00:42:33.643448 | orchestrator | 2025-05-17 00:42:33.643700 | orchestrator | Saturday 17 May 2025 00:42:33 
+0000 (0:00:00.902) 0:00:07.642 ********** 2025-05-17 00:42:33.644006 | orchestrator | =============================================================================== 2025-05-17 00:42:33.644242 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.57s 2025-05-17 00:42:33.644646 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.02s 2025-05-17 00:42:33.644883 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.97s 2025-05-17 00:42:33.645093 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.90s 2025-05-17 00:42:35.821496 | orchestrator | 2025-05-17 00:42:35 | INFO  | Task c5636999-ca5a-46a2-89bf-71905585d3f4 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-17 00:42:35.821591 | orchestrator | 2025-05-17 00:42:35 | INFO  | It takes a moment until task c5636999-ca5a-46a2-89bf-71905585d3f4 (ceph-configure-lvm-volumes) has been started and output is visible here. 
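A rough sketch of the device-wipe sequence timed in the first summary above ("Wipe partitions with wipefs", "Overwrite first 32M with zeros", "Reload udev rules", "Request device events from the kernel"). The helper name and exact flags are assumptions for illustration, not taken from the playbook source:

```python
def build_wipe_commands(device: str) -> list[list[str]]:
    """Return argv lists for clearing one block device before Ceph OSD reuse.

    Hypothetical reconstruction of the wipe tasks seen in the log; the real
    playbook may use different options.
    """
    return [
        # Remove all filesystem / partition-table signatures.
        ["wipefs", "--all", device],
        # Zero the first 32 MiB so stale LVM/Ceph metadata disappears.
        ["dd", "if=/dev/zero", f"of={device}", "bs=1M", "count=32"],
        # Reload udev rules, then ask the kernel to re-emit device events.
        ["udevadm", "control", "--reload"],
        ["udevadm", "trigger", device],
    ]

for argv in build_wipe_commands("/dev/sdb"):
    print(" ".join(argv))
```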
2025-05-17 00:42:39.595779 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-17 00:42:40.126754 | orchestrator |
2025-05-17 00:42:40.127125 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-17 00:42:40.127833 | orchestrator |
2025-05-17 00:42:40.128334 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-17 00:42:40.129565 | orchestrator | Saturday 17 May 2025 00:42:40 +0000 (0:00:00.449) 0:00:00.449 **********
2025-05-17 00:42:40.425399 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-17 00:42:40.426740 | orchestrator |
2025-05-17 00:42:40.428760 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-17 00:42:40.429489 | orchestrator | Saturday 17 May 2025 00:42:40 +0000 (0:00:00.298) 0:00:00.748 **********
2025-05-17 00:42:40.717131 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:42:40.717456 | orchestrator |
2025-05-17 00:42:40.717488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:40.717549 | orchestrator | Saturday 17 May 2025 00:42:40 +0000 (0:00:00.292) 0:00:01.040 **********
2025-05-17 00:42:41.205053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-17 00:42:41.205267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-17 00:42:41.208452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-17 00:42:41.212825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-17 00:42:41.213765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-17 00:42:41.214919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-17 00:42:41.215380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-17 00:42:41.216655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-17 00:42:41.216981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-17 00:42:41.217449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-17 00:42:41.218107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-17 00:42:41.219561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-17 00:42:41.219897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-17 00:42:41.220800 | orchestrator |
2025-05-17 00:42:41.220828 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:41.221308 | orchestrator | Saturday 17 May 2025 00:42:41 +0000 (0:00:00.486) 0:00:01.526 **********
2025-05-17 00:42:41.396245 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:41.401348 | orchestrator |
2025-05-17 00:42:41.401495 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:41.402921 | orchestrator | Saturday 17 May 2025 00:42:41 +0000 (0:00:00.191) 0:00:01.718 **********
2025-05-17 00:42:41.583781 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:41.585432 | orchestrator |
2025-05-17 00:42:41.585498 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:41.586199 | orchestrator | Saturday 17 May 2025 00:42:41 +0000 (0:00:00.190) 0:00:01.908 **********
2025-05-17 00:42:41.775305 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:41.775768 | orchestrator |
2025-05-17 00:42:41.777451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:41.780836 | orchestrator | Saturday 17 May 2025 00:42:41 +0000 (0:00:00.190) 0:00:02.099 **********
2025-05-17 00:42:41.970196 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:41.970286 | orchestrator |
2025-05-17 00:42:41.970871 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:41.970898 | orchestrator | Saturday 17 May 2025 00:42:41 +0000 (0:00:00.194) 0:00:02.294 **********
2025-05-17 00:42:42.180226 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:42.181305 | orchestrator |
2025-05-17 00:42:42.182357 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:42.183000 | orchestrator | Saturday 17 May 2025 00:42:42 +0000 (0:00:00.211) 0:00:02.505 **********
2025-05-17 00:42:42.388814 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:42.390909 | orchestrator |
2025-05-17 00:42:42.391729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:42.394164 | orchestrator | Saturday 17 May 2025 00:42:42 +0000 (0:00:00.206) 0:00:02.712 **********
2025-05-17 00:42:42.598096 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:42.599289 | orchestrator |
2025-05-17 00:42:42.600792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:42.602177 | orchestrator | Saturday 17 May 2025 00:42:42 +0000 (0:00:00.209) 0:00:02.921 **********
2025-05-17 00:42:42.795402 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:42.796140 | orchestrator |
2025-05-17 00:42:42.796952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:42.797955 | orchestrator | Saturday 17 May 2025 00:42:42 +0000 (0:00:00.197) 0:00:03.119 **********
2025-05-17 00:42:43.432087 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c)
2025-05-17 00:42:43.432401 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c)
2025-05-17 00:42:43.433678 | orchestrator |
2025-05-17 00:42:43.434419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:43.437112 | orchestrator | Saturday 17 May 2025 00:42:43 +0000 (0:00:00.636) 0:00:03.755 **********
2025-05-17 00:42:44.231497 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4c541808-fecb-473a-bfa6-e6107b1a17c0)
2025-05-17 00:42:44.232704 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4c541808-fecb-473a-bfa6-e6107b1a17c0)
2025-05-17 00:42:44.236444 | orchestrator |
2025-05-17 00:42:44.237525 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:44.238638 | orchestrator | Saturday 17 May 2025 00:42:44 +0000 (0:00:00.797) 0:00:04.553 **********
2025-05-17 00:42:44.665689 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0e5716a4-9f06-4595-a8e5-44869be2d3e3)
2025-05-17 00:42:44.669507 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0e5716a4-9f06-4595-a8e5-44869be2d3e3)
2025-05-17 00:42:44.672529 | orchestrator |
2025-05-17 00:42:44.672941 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:44.674012 | orchestrator | Saturday 17 May 2025 00:42:44 +0000 (0:00:00.436) 0:00:04.989 **********
2025-05-17 00:42:45.066550 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6120ef73-2521-4d83-8ac9-34a2289f978b)
2025-05-17 00:42:45.067047 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6120ef73-2521-4d83-8ac9-34a2289f978b)
2025-05-17 00:42:45.069512 | orchestrator |
2025-05-17 00:42:45.069560 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:45.069574 | orchestrator | Saturday 17 May 2025 00:42:45 +0000 (0:00:00.401) 0:00:05.391 **********
2025-05-17 00:42:45.362515 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-17 00:42:45.362626 | orchestrator |
2025-05-17 00:42:45.362675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:45.362697 | orchestrator | Saturday 17 May 2025 00:42:45 +0000 (0:00:00.294) 0:00:05.685 **********
2025-05-17 00:42:45.728577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-17 00:42:45.728673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-17 00:42:45.728742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-17 00:42:45.729124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-17 00:42:45.729657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-17 00:42:45.730082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-17 00:42:45.731353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-17 00:42:45.731723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-17 00:42:45.732546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-17 00:42:45.732774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-17 00:42:45.733253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-17 00:42:45.733814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-17 00:42:45.734261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-17 00:42:45.735192 | orchestrator |
2025-05-17 00:42:45.735434 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:45.735820 | orchestrator | Saturday 17 May 2025 00:42:45 +0000 (0:00:00.364) 0:00:06.050 **********
2025-05-17 00:42:45.895343 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:45.895472 | orchestrator |
2025-05-17 00:42:45.897971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:45.898233 | orchestrator | Saturday 17 May 2025 00:42:45 +0000 (0:00:00.170) 0:00:06.220 **********
2025-05-17 00:42:46.041622 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:46.042353 | orchestrator |
2025-05-17 00:42:46.042601 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:46.042939 | orchestrator | Saturday 17 May 2025 00:42:46 +0000 (0:00:00.144) 0:00:06.365 **********
2025-05-17 00:42:46.201762 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:46.201912 | orchestrator |
2025-05-17 00:42:46.202093 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:46.202117 | orchestrator | Saturday 17 May 2025 00:42:46 +0000 (0:00:00.158) 0:00:06.524 **********
2025-05-17 00:42:46.364336 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:46.364426 | orchestrator |
2025-05-17 00:42:46.368983 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:46.369913 | orchestrator | Saturday 17 May 2025 00:42:46 +0000 (0:00:00.166) 0:00:06.691 **********
2025-05-17 00:42:46.652145 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:46.654778 | orchestrator |
2025-05-17 00:42:46.655090 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:46.655191 | orchestrator | Saturday 17 May 2025 00:42:46 +0000 (0:00:00.287) 0:00:06.978 **********
2025-05-17 00:42:46.838745 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:46.839098 | orchestrator |
2025-05-17 00:42:46.844153 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:46.844614 | orchestrator | Saturday 17 May 2025 00:42:46 +0000 (0:00:00.185) 0:00:07.164 **********
2025-05-17 00:42:47.020217 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:47.020312 | orchestrator |
2025-05-17 00:42:47.022605 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:47.022709 | orchestrator | Saturday 17 May 2025 00:42:47 +0000 (0:00:00.181) 0:00:07.345 **********
2025-05-17 00:42:47.222996 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:47.224492 | orchestrator |
2025-05-17 00:42:47.225280 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:47.227352 | orchestrator | Saturday 17 May 2025 00:42:47 +0000 (0:00:00.203) 0:00:07.549 **********
2025-05-17 00:42:47.753910 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-17 00:42:47.754003 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-17 00:42:47.754067 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-17 00:42:47.754084 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-17 00:42:47.754098 | orchestrator |
2025-05-17 00:42:47.754108 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:47.754118 | orchestrator | Saturday 17 May 2025 00:42:47 +0000 (0:00:00.526) 0:00:08.075 **********
2025-05-17 00:42:47.935907 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:47.936001 | orchestrator |
2025-05-17 00:42:47.936017 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:47.936029 | orchestrator | Saturday 17 May 2025 00:42:47 +0000 (0:00:00.180) 0:00:08.256 **********
2025-05-17 00:42:48.119834 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:48.119970 | orchestrator |
2025-05-17 00:42:48.120409 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:48.120434 | orchestrator | Saturday 17 May 2025 00:42:48 +0000 (0:00:00.189) 0:00:08.445 **********
2025-05-17 00:42:48.266967 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:48.268172 | orchestrator |
2025-05-17 00:42:48.268367 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:42:48.270603 | orchestrator | Saturday 17 May 2025 00:42:48 +0000 (0:00:00.147) 0:00:08.593 **********
2025-05-17 00:42:48.428794 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:48.428982 | orchestrator |
2025-05-17 00:42:48.429053 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-17 00:42:48.429066 | orchestrator | Saturday 17 May 2025 00:42:48 +0000 (0:00:00.159) 0:00:08.752 **********
2025-05-17 00:42:48.565948 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-17 00:42:48.566543 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-17 00:42:48.566977 | orchestrator |
2025-05-17 00:42:48.567040 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-17 00:42:48.567255 | orchestrator | Saturday 17 May 2025 00:42:48 +0000 (0:00:00.139) 0:00:08.892 **********
2025-05-17 00:42:48.678367 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:48.678502 | orchestrator |
2025-05-17 00:42:48.678610 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-17 00:42:48.679568 | orchestrator | Saturday 17 May 2025 00:42:48 +0000 (0:00:00.111) 0:00:09.004 **********
2025-05-17 00:42:48.802308 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:48.802435 | orchestrator |
2025-05-17 00:42:48.802455 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-17 00:42:48.802469 | orchestrator | Saturday 17 May 2025 00:42:48 +0000 (0:00:00.124) 0:00:09.128 **********
2025-05-17 00:42:49.073391 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:49.073631 | orchestrator |
2025-05-17 00:42:49.074080 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-17 00:42:49.074620 | orchestrator | Saturday 17 May 2025 00:42:49 +0000 (0:00:00.270) 0:00:09.398 **********
2025-05-17 00:42:49.198255 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:42:49.198836 | orchestrator |
2025-05-17 00:42:49.199741 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-17 00:42:49.200359 | orchestrator | Saturday 17 May 2025 00:42:49 +0000 (0:00:00.124) 0:00:09.523 **********
2025-05-17 00:42:49.362214 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dd92559-5dfb-56e9-86ff-64c31a268c5e'}})
2025-05-17 00:42:49.362338 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25c991a6-e724-5c1a-b659-154410c60242'}})
2025-05-17 00:42:49.362417 | orchestrator |
2025-05-17 00:42:49.362434 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-17 00:42:49.364384 | orchestrator | Saturday 17 May 2025 00:42:49 +0000 (0:00:00.163) 0:00:09.686 **********
2025-05-17 00:42:49.534759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dd92559-5dfb-56e9-86ff-64c31a268c5e'}})
2025-05-17 00:42:49.535079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25c991a6-e724-5c1a-b659-154410c60242'}})
2025-05-17 00:42:49.535177 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:49.535523 | orchestrator |
2025-05-17 00:42:49.536646 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-17 00:42:49.536836 | orchestrator | Saturday 17 May 2025 00:42:49 +0000 (0:00:00.174) 0:00:09.860 **********
2025-05-17 00:42:49.716502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dd92559-5dfb-56e9-86ff-64c31a268c5e'}})
2025-05-17 00:42:49.716683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25c991a6-e724-5c1a-b659-154410c60242'}})
2025-05-17 00:42:49.716696 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:49.718369 | orchestrator |
2025-05-17 00:42:49.718434 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-17 00:42:49.718599 | orchestrator | Saturday 17 May 2025 00:42:49 +0000 (0:00:00.179) 0:00:10.040 **********
2025-05-17 00:42:49.851244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dd92559-5dfb-56e9-86ff-64c31a268c5e'}})
2025-05-17 00:42:49.851477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25c991a6-e724-5c1a-b659-154410c60242'}})
2025-05-17 00:42:49.851711 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:49.851928 | orchestrator |
2025-05-17 00:42:49.852479 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-17 00:42:49.852694 | orchestrator | Saturday 17 May 2025 00:42:49 +0000 (0:00:00.137) 0:00:10.177 **********
2025-05-17 00:42:49.979956 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:42:49.980058 | orchestrator |
2025-05-17 00:42:49.980067 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-17 00:42:49.980126 | orchestrator | Saturday 17 May 2025 00:42:49 +0000 (0:00:00.125) 0:00:10.303 **********
2025-05-17 00:42:50.104780 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:42:50.105472 | orchestrator |
2025-05-17 00:42:50.105663 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-17 00:42:50.105976 | orchestrator | Saturday 17 May 2025 00:42:50 +0000 (0:00:00.127) 0:00:10.431 **********
2025-05-17 00:42:50.234972 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:50.237157 | orchestrator |
2025-05-17 00:42:50.239427 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-17 00:42:50.239661 | orchestrator | Saturday 17 May 2025 00:42:50 +0000 (0:00:00.129) 0:00:10.560 **********
2025-05-17 00:42:50.372594 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:50.372751 | orchestrator |
2025-05-17 00:42:50.372880 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-17 00:42:50.372964 | orchestrator | Saturday 17 May 2025 00:42:50 +0000 (0:00:00.132) 0:00:10.693 **********
2025-05-17 00:42:50.512114 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:50.512236 | orchestrator |
2025-05-17 00:42:50.512321 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-17 00:42:50.512335 | orchestrator | Saturday 17 May 2025 00:42:50 +0000 (0:00:00.144) 0:00:10.838 **********
2025-05-17 00:42:50.796382 | orchestrator | ok: [testbed-node-3] => {
2025-05-17 00:42:50.796578 | orchestrator |  "ceph_osd_devices": {
2025-05-17 00:42:50.797588 | orchestrator |  "sdb": {
2025-05-17 00:42:50.798837 | orchestrator |  "osd_lvm_uuid": "7dd92559-5dfb-56e9-86ff-64c31a268c5e"
2025-05-17 00:42:50.802561 | orchestrator |  },
2025-05-17 00:42:50.803171 | orchestrator |  "sdc": {
2025-05-17 00:42:50.803556 | orchestrator |  "osd_lvm_uuid": "25c991a6-e724-5c1a-b659-154410c60242"
2025-05-17 00:42:50.803623 | orchestrator |  }
2025-05-17 00:42:50.803950 | orchestrator |  }
2025-05-17 00:42:50.804245 | orchestrator | }
2025-05-17 00:42:50.804556 | orchestrator |
2025-05-17 00:42:50.804950 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-17 00:42:50.806359 | orchestrator | Saturday 17 May 2025 00:42:50 +0000 (0:00:00.282) 0:00:11.121 **********
2025-05-17 00:42:50.909092 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:50.909268 | orchestrator |
2025-05-17 00:42:50.910123 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-17 00:42:50.910211 | orchestrator | Saturday 17 May 2025 00:42:50 +0000 (0:00:00.113) 0:00:11.235 **********
2025-05-17 00:42:51.026799 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:51.027060 | orchestrator |
2025-05-17 00:42:51.027971 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-17 00:42:51.028117 | orchestrator | Saturday 17 May 2025 00:42:51 +0000 (0:00:00.118) 0:00:11.353 **********
2025-05-17 00:42:51.149080 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:42:51.149385 | orchestrator |
2025-05-17 00:42:51.149482 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-17 00:42:51.150165 | orchestrator | Saturday 17 May 2025 00:42:51 +0000 (0:00:00.121) 0:00:11.474 **********
2025-05-17 00:42:51.401885 | orchestrator | changed: [testbed-node-3] => {
2025-05-17 00:42:51.402159 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-17 00:42:51.402628 | orchestrator |  "ceph_osd_devices": {
2025-05-17 00:42:51.404817 | orchestrator |  "sdb": {
2025-05-17 00:42:51.407673 | orchestrator |  "osd_lvm_uuid": "7dd92559-5dfb-56e9-86ff-64c31a268c5e"
2025-05-17 00:42:51.408317 | orchestrator |  },
2025-05-17 00:42:51.408342 | orchestrator |  "sdc": {
2025-05-17 00:42:51.408733 | orchestrator |  "osd_lvm_uuid": "25c991a6-e724-5c1a-b659-154410c60242"
2025-05-17 00:42:51.408827 | orchestrator |  }
2025-05-17 00:42:51.409285 | orchestrator |  },
2025-05-17 00:42:51.409447 | orchestrator |  "lvm_volumes": [
2025-05-17 00:42:51.409865 | orchestrator |  {
2025-05-17 00:42:51.410106 | orchestrator |  "data": "osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e",
2025-05-17 00:42:51.410447 | orchestrator |  "data_vg": "ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e"
2025-05-17 00:42:51.410883 | orchestrator |  },
2025-05-17 00:42:51.411161 | orchestrator |  {
2025-05-17 00:42:51.411629 | orchestrator |  "data": "osd-block-25c991a6-e724-5c1a-b659-154410c60242",
2025-05-17 00:42:51.411798 | orchestrator |  "data_vg": "ceph-25c991a6-e724-5c1a-b659-154410c60242"
2025-05-17 00:42:51.412269 | orchestrator |  }
2025-05-17 00:42:51.412508 | orchestrator |  ]
2025-05-17 00:42:51.412796 | orchestrator |  }
2025-05-17 00:42:51.413014 | orchestrator | }
2025-05-17 00:42:51.413239 | orchestrator |
2025-05-17 00:42:51.413531 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-17 00:42:51.413733 | orchestrator | Saturday 17 May 2025 00:42:51 +0000 (0:00:00.253) 0:00:11.727 **********
2025-05-17 00:42:53.354883 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-17 00:42:53.355002 | orchestrator |
2025-05-17 00:42:53.355014 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-17 00:42:53.358204 | orchestrator |
2025-05-17 00:42:53.359313 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-17 00:42:53.360292 | orchestrator | Saturday 17 May 2025 00:42:53 +0000 (0:00:01.947) 0:00:13.675 **********
2025-05-17 00:42:53.591249 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-17 00:42:53.592333 | orchestrator |
2025-05-17 00:42:53.592554 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-17 00:42:53.594404 | orchestrator | Saturday 17 May 2025 00:42:53 +0000 (0:00:00.240) 0:00:13.916 **********
2025-05-17 00:42:53.830223 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:42:53.832364 | orchestrator |
2025-05-17 00:42:53.833250 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:42:53.834606 | orchestrator | Saturday 17 May 2025 00:42:53 +0000 (0:00:00.237) 0:00:14.154 **********
2025-05-17 00:42:54.217117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-17 00:42:54.218010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-17 00:42:54.219221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-17 00:42:54.221804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-17 00:42:54.221833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-17 00:42:54.223502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-17 00:42:54.226243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-17 00:42:54.226736
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-17 00:42:54.227151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-17 00:42:54.228132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-17 00:42:54.228459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-17 00:42:54.228843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-17 00:42:54.229438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-17 00:42:54.230975 | orchestrator | 2025-05-17 00:42:54.231114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:54.232005 | orchestrator | Saturday 17 May 2025 00:42:54 +0000 (0:00:00.387) 0:00:14.541 ********** 2025-05-17 00:42:54.433146 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:42:54.433246 | orchestrator | 2025-05-17 00:42:54.433262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:54.433276 | orchestrator | Saturday 17 May 2025 00:42:54 +0000 (0:00:00.215) 0:00:14.756 ********** 2025-05-17 00:42:54.658677 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:42:54.660293 | orchestrator | 2025-05-17 00:42:54.661155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:54.662756 | orchestrator | Saturday 17 May 2025 00:42:54 +0000 (0:00:00.225) 0:00:14.982 ********** 2025-05-17 00:42:54.921497 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:42:54.921621 | orchestrator | 2025-05-17 00:42:54.923776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:54.923806 | 
orchestrator | Saturday 17 May 2025 00:42:54 +0000 (0:00:00.258) 0:00:15.240 ********** 2025-05-17 00:42:55.173403 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:42:55.173597 | orchestrator | 2025-05-17 00:42:55.173721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:55.174944 | orchestrator | Saturday 17 May 2025 00:42:55 +0000 (0:00:00.253) 0:00:15.494 ********** 2025-05-17 00:42:55.988119 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:42:55.993820 | orchestrator | 2025-05-17 00:42:55.994230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:55.995603 | orchestrator | Saturday 17 May 2025 00:42:55 +0000 (0:00:00.816) 0:00:16.310 ********** 2025-05-17 00:42:56.224718 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:42:56.224926 | orchestrator | 2025-05-17 00:42:56.225022 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:56.227139 | orchestrator | Saturday 17 May 2025 00:42:56 +0000 (0:00:00.235) 0:00:16.546 ********** 2025-05-17 00:42:56.453095 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:42:56.455811 | orchestrator | 2025-05-17 00:42:56.455934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:56.455951 | orchestrator | Saturday 17 May 2025 00:42:56 +0000 (0:00:00.229) 0:00:16.776 ********** 2025-05-17 00:42:56.685172 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:42:56.685325 | orchestrator | 2025-05-17 00:42:56.685730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:56.685755 | orchestrator | Saturday 17 May 2025 00:42:56 +0000 (0:00:00.234) 0:00:17.010 ********** 2025-05-17 00:42:57.178994 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee) 2025-05-17 00:42:57.179093 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee) 2025-05-17 00:42:57.179647 | orchestrator | 2025-05-17 00:42:57.180230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:57.180449 | orchestrator | Saturday 17 May 2025 00:42:57 +0000 (0:00:00.491) 0:00:17.502 ********** 2025-05-17 00:42:57.741346 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6fc6848d-5127-4f65-b412-e829995e25e7) 2025-05-17 00:42:57.741433 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6fc6848d-5127-4f65-b412-e829995e25e7) 2025-05-17 00:42:57.741762 | orchestrator | 2025-05-17 00:42:57.742843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:57.743441 | orchestrator | Saturday 17 May 2025 00:42:57 +0000 (0:00:00.563) 0:00:18.065 ********** 2025-05-17 00:42:58.375115 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e3068b10-d912-449c-8868-8ffe0bc578f0) 2025-05-17 00:42:58.377574 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e3068b10-d912-449c-8868-8ffe0bc578f0) 2025-05-17 00:42:58.377614 | orchestrator | 2025-05-17 00:42:58.378538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:42:58.378570 | orchestrator | Saturday 17 May 2025 00:42:58 +0000 (0:00:00.630) 0:00:18.696 ********** 2025-05-17 00:42:58.905923 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bec56d32-b1fb-48c0-a20f-a6daa2f9686d) 2025-05-17 00:42:58.907476 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bec56d32-b1fb-48c0-a20f-a6daa2f9686d) 2025-05-17 00:42:58.909824 | orchestrator | 2025-05-17 00:42:58.910147 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-05-17 00:42:58.910847 | orchestrator | Saturday 17 May 2025 00:42:58 +0000 (0:00:00.531) 0:00:19.228 ********** 2025-05-17 00:42:59.240676 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-17 00:42:59.241691 | orchestrator | 2025-05-17 00:42:59.243999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:42:59.244573 | orchestrator | Saturday 17 May 2025 00:42:59 +0000 (0:00:00.336) 0:00:19.564 ********** 2025-05-17 00:42:59.801104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-17 00:42:59.802133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-17 00:42:59.803459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-17 00:42:59.804560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-17 00:42:59.805012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-17 00:42:59.805985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-17 00:42:59.809756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-17 00:42:59.810434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-17 00:42:59.811128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-17 00:42:59.811731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-17 00:42:59.812464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2025-05-17 00:42:59.813178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-17 00:42:59.814296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-17 00:42:59.814414 | orchestrator | 2025-05-17 00:42:59.816808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:42:59.816954 | orchestrator | Saturday 17 May 2025 00:42:59 +0000 (0:00:00.561) 0:00:20.126 ********** 2025-05-17 00:43:00.015983 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:00.017471 | orchestrator | 2025-05-17 00:43:00.022401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:00.023498 | orchestrator | Saturday 17 May 2025 00:43:00 +0000 (0:00:00.212) 0:00:20.338 ********** 2025-05-17 00:43:00.226317 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:00.227626 | orchestrator | 2025-05-17 00:43:00.227819 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:00.229529 | orchestrator | Saturday 17 May 2025 00:43:00 +0000 (0:00:00.211) 0:00:20.550 ********** 2025-05-17 00:43:00.453728 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:00.457633 | orchestrator | 2025-05-17 00:43:00.457734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:00.458006 | orchestrator | Saturday 17 May 2025 00:43:00 +0000 (0:00:00.221) 0:00:20.771 ********** 2025-05-17 00:43:00.665410 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:00.665515 | orchestrator | 2025-05-17 00:43:00.665530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:00.665597 | orchestrator | Saturday 17 May 2025 00:43:00 +0000 (0:00:00.215) 0:00:20.987 ********** 2025-05-17 00:43:00.888190 
| orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:00.890943 | orchestrator | 2025-05-17 00:43:00.890988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:00.891003 | orchestrator | Saturday 17 May 2025 00:43:00 +0000 (0:00:00.226) 0:00:21.213 ********** 2025-05-17 00:43:01.081718 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:01.082579 | orchestrator | 2025-05-17 00:43:01.086406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:01.086703 | orchestrator | Saturday 17 May 2025 00:43:01 +0000 (0:00:00.191) 0:00:21.405 ********** 2025-05-17 00:43:01.274088 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:01.274500 | orchestrator | 2025-05-17 00:43:01.275455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:01.276582 | orchestrator | Saturday 17 May 2025 00:43:01 +0000 (0:00:00.191) 0:00:21.597 ********** 2025-05-17 00:43:01.492900 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:01.493045 | orchestrator | 2025-05-17 00:43:01.494166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:01.495359 | orchestrator | Saturday 17 May 2025 00:43:01 +0000 (0:00:00.216) 0:00:21.814 ********** 2025-05-17 00:43:02.330337 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-17 00:43:02.330654 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-17 00:43:02.332520 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-17 00:43:02.333831 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-17 00:43:02.335052 | orchestrator | 2025-05-17 00:43:02.336041 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:02.337311 | orchestrator | Saturday 17 May 2025 00:43:02 +0000 (0:00:00.838) 0:00:22.653 
********** 2025-05-17 00:43:02.527124 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:02.528686 | orchestrator | 2025-05-17 00:43:02.529130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:02.530167 | orchestrator | Saturday 17 May 2025 00:43:02 +0000 (0:00:00.197) 0:00:22.851 ********** 2025-05-17 00:43:03.173021 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:03.174972 | orchestrator | 2025-05-17 00:43:03.176384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:03.177112 | orchestrator | Saturday 17 May 2025 00:43:03 +0000 (0:00:00.643) 0:00:23.495 ********** 2025-05-17 00:43:03.380992 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:03.381083 | orchestrator | 2025-05-17 00:43:03.381920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:43:03.383141 | orchestrator | Saturday 17 May 2025 00:43:03 +0000 (0:00:00.209) 0:00:23.704 ********** 2025-05-17 00:43:03.590546 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:03.591399 | orchestrator | 2025-05-17 00:43:03.592238 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-17 00:43:03.592996 | orchestrator | Saturday 17 May 2025 00:43:03 +0000 (0:00:00.210) 0:00:23.915 ********** 2025-05-17 00:43:03.803915 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-17 00:43:03.805216 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-17 00:43:03.809609 | orchestrator | 2025-05-17 00:43:03.809644 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-17 00:43:03.809653 | orchestrator | Saturday 17 May 2025 00:43:03 +0000 (0:00:00.212) 0:00:24.128 ********** 2025-05-17 00:43:03.934505 | orchestrator | skipping: 
[testbed-node-4] 2025-05-17 00:43:03.935375 | orchestrator | 2025-05-17 00:43:03.936569 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-17 00:43:03.937339 | orchestrator | Saturday 17 May 2025 00:43:03 +0000 (0:00:00.130) 0:00:24.259 ********** 2025-05-17 00:43:04.075183 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:04.075282 | orchestrator | 2025-05-17 00:43:04.075318 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-17 00:43:04.076438 | orchestrator | Saturday 17 May 2025 00:43:04 +0000 (0:00:00.135) 0:00:24.394 ********** 2025-05-17 00:43:04.205012 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:04.206180 | orchestrator | 2025-05-17 00:43:04.207471 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-17 00:43:04.208383 | orchestrator | Saturday 17 May 2025 00:43:04 +0000 (0:00:00.135) 0:00:24.529 ********** 2025-05-17 00:43:04.360109 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:43:04.360311 | orchestrator | 2025-05-17 00:43:04.360899 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-17 00:43:04.362075 | orchestrator | Saturday 17 May 2025 00:43:04 +0000 (0:00:00.155) 0:00:24.684 ********** 2025-05-17 00:43:04.583695 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93bb0954-6685-5c67-a7e0-a3574f092206'}}) 2025-05-17 00:43:04.584318 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e21dde7b-e402-5316-8511-fd8df0cc7e38'}}) 2025-05-17 00:43:04.585471 | orchestrator | 2025-05-17 00:43:04.586320 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-17 00:43:04.587164 | orchestrator | Saturday 17 May 2025 00:43:04 +0000 (0:00:00.221) 0:00:24.906 ********** 2025-05-17 00:43:04.756636 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93bb0954-6685-5c67-a7e0-a3574f092206'}})  2025-05-17 00:43:04.757158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e21dde7b-e402-5316-8511-fd8df0cc7e38'}})  2025-05-17 00:43:04.757915 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:04.760413 | orchestrator | 2025-05-17 00:43:04.760511 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-17 00:43:04.760527 | orchestrator | Saturday 17 May 2025 00:43:04 +0000 (0:00:00.174) 0:00:25.080 ********** 2025-05-17 00:43:04.918335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93bb0954-6685-5c67-a7e0-a3574f092206'}})  2025-05-17 00:43:04.918774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e21dde7b-e402-5316-8511-fd8df0cc7e38'}})  2025-05-17 00:43:04.919733 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:04.920712 | orchestrator | 2025-05-17 00:43:04.921550 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-17 00:43:04.924151 | orchestrator | Saturday 17 May 2025 00:43:04 +0000 (0:00:00.162) 0:00:25.242 ********** 2025-05-17 00:43:05.287692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93bb0954-6685-5c67-a7e0-a3574f092206'}})  2025-05-17 00:43:05.287941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e21dde7b-e402-5316-8511-fd8df0cc7e38'}})  2025-05-17 00:43:05.288435 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:05.288623 | orchestrator | 2025-05-17 00:43:05.289050 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-17 00:43:05.292491 | orchestrator | Saturday 17 May 2025 00:43:05 +0000 
(0:00:00.366) 0:00:25.609 ********** 2025-05-17 00:43:05.415367 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:43:05.415909 | orchestrator | 2025-05-17 00:43:05.416277 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-17 00:43:05.416893 | orchestrator | Saturday 17 May 2025 00:43:05 +0000 (0:00:00.130) 0:00:25.740 ********** 2025-05-17 00:43:05.560422 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:43:05.561064 | orchestrator | 2025-05-17 00:43:05.561745 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-17 00:43:05.562530 | orchestrator | Saturday 17 May 2025 00:43:05 +0000 (0:00:00.144) 0:00:25.885 ********** 2025-05-17 00:43:05.706755 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:05.706924 | orchestrator | 2025-05-17 00:43:05.707101 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-17 00:43:05.707572 | orchestrator | Saturday 17 May 2025 00:43:05 +0000 (0:00:00.144) 0:00:26.030 ********** 2025-05-17 00:43:05.835636 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:05.836764 | orchestrator | 2025-05-17 00:43:05.837823 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-17 00:43:05.840269 | orchestrator | Saturday 17 May 2025 00:43:05 +0000 (0:00:00.129) 0:00:26.160 ********** 2025-05-17 00:43:05.959304 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:05.963394 | orchestrator | 2025-05-17 00:43:05.965104 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-17 00:43:05.965163 | orchestrator | Saturday 17 May 2025 00:43:05 +0000 (0:00:00.123) 0:00:26.284 ********** 2025-05-17 00:43:06.124258 | orchestrator | ok: [testbed-node-4] => { 2025-05-17 00:43:06.124354 | orchestrator |  "ceph_osd_devices": { 2025-05-17 00:43:06.124441 | orchestrator |  "sdb": 
{ 2025-05-17 00:43:06.124681 | orchestrator |  "osd_lvm_uuid": "93bb0954-6685-5c67-a7e0-a3574f092206" 2025-05-17 00:43:06.125493 | orchestrator |  }, 2025-05-17 00:43:06.125569 | orchestrator |  "sdc": { 2025-05-17 00:43:06.126398 | orchestrator |  "osd_lvm_uuid": "e21dde7b-e402-5316-8511-fd8df0cc7e38" 2025-05-17 00:43:06.129076 | orchestrator |  } 2025-05-17 00:43:06.129101 | orchestrator |  } 2025-05-17 00:43:06.129112 | orchestrator | } 2025-05-17 00:43:06.129121 | orchestrator | 2025-05-17 00:43:06.130275 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-17 00:43:06.130681 | orchestrator | Saturday 17 May 2025 00:43:06 +0000 (0:00:00.163) 0:00:26.448 ********** 2025-05-17 00:43:06.249066 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:06.249997 | orchestrator | 2025-05-17 00:43:06.253463 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-17 00:43:06.253502 | orchestrator | Saturday 17 May 2025 00:43:06 +0000 (0:00:00.125) 0:00:26.573 ********** 2025-05-17 00:43:06.382942 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:06.387031 | orchestrator | 2025-05-17 00:43:06.388065 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-17 00:43:06.388401 | orchestrator | Saturday 17 May 2025 00:43:06 +0000 (0:00:00.133) 0:00:26.707 ********** 2025-05-17 00:43:06.541850 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:43:06.542557 | orchestrator | 2025-05-17 00:43:06.545477 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-17 00:43:06.545774 | orchestrator | Saturday 17 May 2025 00:43:06 +0000 (0:00:00.158) 0:00:26.865 ********** 2025-05-17 00:43:07.020013 | orchestrator | changed: [testbed-node-4] => { 2025-05-17 00:43:07.021213 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-17 00:43:07.022531 | orchestrator 
|  "ceph_osd_devices": { 2025-05-17 00:43:07.023583 | orchestrator |  "sdb": { 2025-05-17 00:43:07.025299 | orchestrator |  "osd_lvm_uuid": "93bb0954-6685-5c67-a7e0-a3574f092206" 2025-05-17 00:43:07.026066 | orchestrator |  }, 2025-05-17 00:43:07.026432 | orchestrator |  "sdc": { 2025-05-17 00:43:07.026736 | orchestrator |  "osd_lvm_uuid": "e21dde7b-e402-5316-8511-fd8df0cc7e38" 2025-05-17 00:43:07.027598 | orchestrator |  } 2025-05-17 00:43:07.028118 | orchestrator |  }, 2025-05-17 00:43:07.029678 | orchestrator |  "lvm_volumes": [ 2025-05-17 00:43:07.029986 | orchestrator |  { 2025-05-17 00:43:07.030358 | orchestrator |  "data": "osd-block-93bb0954-6685-5c67-a7e0-a3574f092206", 2025-05-17 00:43:07.030780 | orchestrator |  "data_vg": "ceph-93bb0954-6685-5c67-a7e0-a3574f092206" 2025-05-17 00:43:07.031458 | orchestrator |  }, 2025-05-17 00:43:07.031731 | orchestrator |  { 2025-05-17 00:43:07.032150 | orchestrator |  "data": "osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38", 2025-05-17 00:43:07.032562 | orchestrator |  "data_vg": "ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38" 2025-05-17 00:43:07.032953 | orchestrator |  } 2025-05-17 00:43:07.033374 | orchestrator |  ] 2025-05-17 00:43:07.033761 | orchestrator |  } 2025-05-17 00:43:07.034449 | orchestrator | } 2025-05-17 00:43:07.034788 | orchestrator | 2025-05-17 00:43:07.035341 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-17 00:43:07.035811 | orchestrator | Saturday 17 May 2025 00:43:07 +0000 (0:00:00.473) 0:00:27.339 ********** 2025-05-17 00:43:08.379393 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-17 00:43:08.379480 | orchestrator | 2025-05-17 00:43:08.379490 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-17 00:43:08.379497 | orchestrator | 2025-05-17 00:43:08.379505 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2025-05-17 00:43:08.379665 | orchestrator | Saturday 17 May 2025 00:43:08 +0000 (0:00:01.360) 0:00:28.700 ********** 2025-05-17 00:43:08.621394 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-17 00:43:08.621493 | orchestrator | 2025-05-17 00:43:08.622162 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-17 00:43:08.622593 | orchestrator | Saturday 17 May 2025 00:43:08 +0000 (0:00:00.245) 0:00:28.945 ********** 2025-05-17 00:43:08.846550 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:43:08.869177 | orchestrator | 2025-05-17 00:43:08.869272 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:43:08.869294 | orchestrator | Saturday 17 May 2025 00:43:08 +0000 (0:00:00.225) 0:00:29.171 ********** 2025-05-17 00:43:09.371349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-17 00:43:09.371518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-17 00:43:09.372394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-17 00:43:09.374594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-17 00:43:09.375974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-17 00:43:09.378176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-17 00:43:09.378955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-17 00:43:09.379366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-17 00:43:09.379749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-17 
00:43:09.380588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-17 00:43:09.380939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-17 00:43:09.381808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-17 00:43:09.382125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-17 00:43:09.382415 | orchestrator | 2025-05-17 00:43:09.382882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:43:09.383073 | orchestrator | Saturday 17 May 2025 00:43:09 +0000 (0:00:00.520) 0:00:29.692 ********** 2025-05-17 00:43:09.575060 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:43:09.575977 | orchestrator | 2025-05-17 00:43:09.577027 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:43:09.577818 | orchestrator | Saturday 17 May 2025 00:43:09 +0000 (0:00:00.206) 0:00:29.899 ********** 2025-05-17 00:43:09.778465 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:43:09.779509 | orchestrator | 2025-05-17 00:43:09.780063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:43:09.782625 | orchestrator | Saturday 17 May 2025 00:43:09 +0000 (0:00:00.202) 0:00:30.102 ********** 2025-05-17 00:43:09.968297 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:43:09.969767 | orchestrator | 2025-05-17 00:43:09.971224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:43:09.974650 | orchestrator | Saturday 17 May 2025 00:43:09 +0000 (0:00:00.190) 0:00:30.292 ********** 2025-05-17 00:43:10.188188 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:43:10.189422 | orchestrator | 2025-05-17 00:43:10.191663 | orchestrator 
| TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:43:10.192410 | orchestrator | Saturday 17 May 2025 00:43:10 +0000 (0:00:00.218) 0:00:30.511 **********
2025-05-17 00:43:10.379178 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:10.379360 | orchestrator |
2025-05-17 00:43:10.380558 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:43:10.382131 | orchestrator | Saturday 17 May 2025 00:43:10 +0000 (0:00:00.191) 0:00:30.703 **********
2025-05-17 00:43:10.599441 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:10.599537 | orchestrator |
2025-05-17 00:43:10.600108 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:43:10.601371 | orchestrator | Saturday 17 May 2025 00:43:10 +0000 (0:00:00.218) 0:00:30.922 **********
2025-05-17 00:43:10.799081 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:10.799619 | orchestrator |
2025-05-17 00:43:10.802601 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:43:10.803321 | orchestrator | Saturday 17 May 2025 00:43:10 +0000 (0:00:00.200) 0:00:31.122 **********
2025-05-17 00:43:11.002344 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:11.002442 | orchestrator |
2025-05-17 00:43:11.002459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:43:11.002472 | orchestrator | Saturday 17 May 2025 00:43:10 +0000 (0:00:00.204) 0:00:31.326 **********
2025-05-17 00:43:11.603849 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50)
2025-05-17 00:43:11.603960 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50)
2025-05-17 00:43:11.604481 | orchestrator |
2025-05-17 00:43:11.605077 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:43:11.606169 | orchestrator | Saturday 17 May 2025 00:43:11 +0000 (0:00:00.599) 0:00:31.926 **********
2025-05-17 00:43:12.379416 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4ddb2821-e209-41e3-b031-9f23c5adf4cf)
2025-05-17 00:43:12.379582 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4ddb2821-e209-41e3-b031-9f23c5adf4cf)
2025-05-17 00:43:12.379708 | orchestrator |
2025-05-17 00:43:12.379993 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:43:12.380628 | orchestrator | Saturday 17 May 2025 00:43:12 +0000 (0:00:00.776) 0:00:32.702 **********
2025-05-17 00:43:12.799743 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8746963d-35d6-4275-a53f-fa471798b09a)
2025-05-17 00:43:12.800474 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8746963d-35d6-4275-a53f-fa471798b09a)
2025-05-17 00:43:12.801850 | orchestrator |
2025-05-17 00:43:12.802287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:43:12.804021 | orchestrator | Saturday 17 May 2025 00:43:12 +0000 (0:00:00.421) 0:00:33.124 **********
2025-05-17 00:43:13.215554 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c9243530-1d89-4c38-b4ef-a9d7ed453cca)
2025-05-17 00:43:13.215714 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c9243530-1d89-4c38-b4ef-a9d7ed453cca)
2025-05-17 00:43:13.216175 | orchestrator |
2025-05-17 00:43:13.216793 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:43:13.219238 | orchestrator | Saturday 17 May 2025 00:43:13 +0000 (0:00:00.414) 0:00:33.539 **********
2025-05-17 00:43:13.542204 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-17 00:43:13.542302 | orchestrator |
2025-05-17 00:43:13.543024 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:13.545127 | orchestrator | Saturday 17 May 2025 00:43:13 +0000 (0:00:00.323) 0:00:33.862 **********
2025-05-17 00:43:13.935169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-05-17 00:43:13.935914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-05-17 00:43:13.936605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-05-17 00:43:13.937587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-05-17 00:43:13.938852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-05-17 00:43:13.938927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-05-17 00:43:13.939959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-05-17 00:43:13.940679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-05-17 00:43:13.941503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-05-17 00:43:13.942311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-05-17 00:43:13.942609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-05-17 00:43:13.943541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-05-17 00:43:13.944960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-05-17 00:43:13.945733 | orchestrator |
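The repeated "Add known links to the list of available block devices" tasks above record, per device, the /dev/disk/by-id symlinks that resolve to that device node (hence the scsi-0QEMU.../scsi-SQEMU... pairs per disk). A minimal Python sketch of that matching, using a simulated by-id directory with hypothetical serial names (not the serials from this run):

```python
import os
import tempfile

def links_for_devices(by_id_dir, devices):
    """Map each device name (e.g. 'sdb') to the by-id symlinks that
    resolve to it -- the same association the task family builds."""
    links = {dev: [] for dev in devices}
    for name in sorted(os.listdir(by_id_dir)):
        # Resolve the symlink and compare the target's basename.
        target = os.path.basename(os.path.realpath(os.path.join(by_id_dir, name)))
        if target in links:
            links[target].append(name)
    return links

# Simulated by-id directory: two links pointing at the same "sdb" node.
with tempfile.TemporaryDirectory() as tmp:
    os.mkdir(os.path.join(tmp, "dev"))
    for dev in ("sda", "sdb"):
        open(os.path.join(tmp, "dev", dev), "w").close()
    os.symlink(os.path.join(tmp, "dev", "sdb"),
               os.path.join(tmp, "scsi-0QEMU_QEMU_HARDDISK_1234"))
    os.symlink(os.path.join(tmp, "dev", "sdb"),
               os.path.join(tmp, "scsi-SQEMU_QEMU_HARDDISK_1234"))
    result = links_for_devices(tmp, ["sda", "sdb"])
    print(result["sdb"])
```

On a real node the loop would walk /dev/disk/by-id directly; the playbook does the equivalent per item via included task files.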
2025-05-17 00:43:13.946499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:13.947293 | orchestrator | Saturday 17 May 2025 00:43:13 +0000 (0:00:00.397) 0:00:34.260 **********
2025-05-17 00:43:14.132340 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:14.132950 | orchestrator |
2025-05-17 00:43:14.133692 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:14.136768 | orchestrator | Saturday 17 May 2025 00:43:14 +0000 (0:00:00.195) 0:00:34.456 **********
2025-05-17 00:43:14.359107 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:14.359650 | orchestrator |
2025-05-17 00:43:14.360499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:14.361452 | orchestrator | Saturday 17 May 2025 00:43:14 +0000 (0:00:00.227) 0:00:34.683 **********
2025-05-17 00:43:14.555895 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:14.557184 | orchestrator |
2025-05-17 00:43:14.557363 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:14.559569 | orchestrator | Saturday 17 May 2025 00:43:14 +0000 (0:00:00.196) 0:00:34.879 **********
2025-05-17 00:43:14.757736 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:14.758088 | orchestrator |
2025-05-17 00:43:14.758548 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:14.759585 | orchestrator | Saturday 17 May 2025 00:43:14 +0000 (0:00:00.202) 0:00:35.082 **********
2025-05-17 00:43:14.943268 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:14.943922 | orchestrator |
2025-05-17 00:43:14.944935 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:14.945649 | orchestrator | Saturday 17 May 2025 00:43:14 +0000 (0:00:00.185) 0:00:35.267 **********
2025-05-17 00:43:15.510941 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:15.511095 | orchestrator |
2025-05-17 00:43:15.514166 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:15.514189 | orchestrator | Saturday 17 May 2025 00:43:15 +0000 (0:00:00.566) 0:00:35.834 **********
2025-05-17 00:43:15.703754 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:15.704460 | orchestrator |
2025-05-17 00:43:15.705440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:15.706365 | orchestrator | Saturday 17 May 2025 00:43:15 +0000 (0:00:00.193) 0:00:36.028 **********
2025-05-17 00:43:15.892633 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:15.893597 | orchestrator |
2025-05-17 00:43:15.894436 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:15.895087 | orchestrator | Saturday 17 May 2025 00:43:15 +0000 (0:00:00.188) 0:00:36.216 **********
2025-05-17 00:43:16.513323 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-05-17 00:43:16.513487 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-05-17 00:43:16.514246 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-05-17 00:43:16.515023 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-05-17 00:43:16.516717 | orchestrator |
2025-05-17 00:43:16.518930 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:16.519264 | orchestrator | Saturday 17 May 2025 00:43:16 +0000 (0:00:00.619) 0:00:36.835 **********
2025-05-17 00:43:16.725700 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:16.726730 | orchestrator |
2025-05-17 00:43:16.726770 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:16.727627 | orchestrator | Saturday 17 May 2025 00:43:16 +0000 (0:00:00.209) 0:00:37.045 **********
2025-05-17 00:43:16.907011 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:16.907691 | orchestrator |
2025-05-17 00:43:16.908417 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:16.911421 | orchestrator | Saturday 17 May 2025 00:43:16 +0000 (0:00:00.184) 0:00:37.230 **********
2025-05-17 00:43:17.104061 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:17.104679 | orchestrator |
2025-05-17 00:43:17.105626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:43:17.106557 | orchestrator | Saturday 17 May 2025 00:43:17 +0000 (0:00:00.197) 0:00:37.428 **********
2025-05-17 00:43:17.300478 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:17.300569 | orchestrator |
2025-05-17 00:43:17.300674 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-17 00:43:17.300693 | orchestrator | Saturday 17 May 2025 00:43:17 +0000 (0:00:00.195) 0:00:37.623 **********
2025-05-17 00:43:17.466776 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-05-17 00:43:17.467446 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-05-17 00:43:17.467941 | orchestrator |
2025-05-17 00:43:17.469651 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-17 00:43:17.471427 | orchestrator | Saturday 17 May 2025 00:43:17 +0000 (0:00:00.167) 0:00:37.791 **********
2025-05-17 00:43:17.599638 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:17.600313 | orchestrator |
2025-05-17 00:43:17.601267 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-17 00:43:17.603465 | orchestrator | Saturday 17 May 2025 00:43:17 +0000 (0:00:00.132) 0:00:37.923 **********
2025-05-17 00:43:17.916844 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:17.917546 | orchestrator |
2025-05-17 00:43:17.918754 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-17 00:43:17.921003 | orchestrator | Saturday 17 May 2025 00:43:17 +0000 (0:00:00.317) 0:00:38.240 **********
2025-05-17 00:43:18.044807 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:18.045713 | orchestrator |
2025-05-17 00:43:18.046919 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-17 00:43:18.047840 | orchestrator | Saturday 17 May 2025 00:43:18 +0000 (0:00:00.127) 0:00:38.368 **********
2025-05-17 00:43:18.187267 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:43:18.187819 | orchestrator |
2025-05-17 00:43:18.188328 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-17 00:43:18.189350 | orchestrator | Saturday 17 May 2025 00:43:18 +0000 (0:00:00.143) 0:00:38.512 **********
2025-05-17 00:43:18.391203 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a83a275b-240b-53eb-892d-9c3e23ab252d'}})
2025-05-17 00:43:18.392047 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4d5f2e3-0e32-57e8-8b55-58d04db15593'}})
2025-05-17 00:43:18.393363 | orchestrator |
2025-05-17 00:43:18.394234 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-17 00:43:18.395034 | orchestrator | Saturday 17 May 2025 00:43:18 +0000 (0:00:00.202) 0:00:38.714 **********
2025-05-17 00:43:18.553559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a83a275b-240b-53eb-892d-9c3e23ab252d'}})
2025-05-17 00:43:18.553990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4d5f2e3-0e32-57e8-8b55-58d04db15593'}})
2025-05-17 00:43:18.554758 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:18.555433 | orchestrator |
2025-05-17 00:43:18.555936 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-17 00:43:18.556665 | orchestrator | Saturday 17 May 2025 00:43:18 +0000 (0:00:00.164) 0:00:38.878 **********
2025-05-17 00:43:18.732350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a83a275b-240b-53eb-892d-9c3e23ab252d'}})
2025-05-17 00:43:18.733774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4d5f2e3-0e32-57e8-8b55-58d04db15593'}})
2025-05-17 00:43:18.734349 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:18.734778 | orchestrator |
2025-05-17 00:43:18.735330 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-17 00:43:18.735701 | orchestrator | Saturday 17 May 2025 00:43:18 +0000 (0:00:00.177) 0:00:39.056 **********
2025-05-17 00:43:18.902983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a83a275b-240b-53eb-892d-9c3e23ab252d'}})
2025-05-17 00:43:18.903789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4d5f2e3-0e32-57e8-8b55-58d04db15593'}})
2025-05-17 00:43:18.906141 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:18.906605 | orchestrator |
2025-05-17 00:43:18.907925 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-17 00:43:18.909300 | orchestrator | Saturday 17 May 2025 00:43:18 +0000 (0:00:00.170) 0:00:39.226 **********
2025-05-17 00:43:19.058227 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:43:19.058566 | orchestrator |
2025-05-17 00:43:19.059158 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-17 00:43:19.059475 | orchestrator | Saturday 17 May 2025 00:43:19 +0000 (0:00:00.155) 0:00:39.382 **********
2025-05-17 00:43:19.217840 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:43:19.218401 | orchestrator |
2025-05-17 00:43:19.219154 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-17 00:43:19.221171 | orchestrator | Saturday 17 May 2025 00:43:19 +0000 (0:00:00.158) 0:00:39.541 **********
2025-05-17 00:43:19.351365 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:19.356116 | orchestrator |
2025-05-17 00:43:19.356933 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-17 00:43:19.357832 | orchestrator | Saturday 17 May 2025 00:43:19 +0000 (0:00:00.132) 0:00:39.673 **********
2025-05-17 00:43:19.475569 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:19.476147 | orchestrator |
2025-05-17 00:43:19.477541 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-17 00:43:19.478545 | orchestrator | Saturday 17 May 2025 00:43:19 +0000 (0:00:00.126) 0:00:39.800 **********
2025-05-17 00:43:19.609704 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:19.611022 | orchestrator |
2025-05-17 00:43:19.612359 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-17 00:43:19.613386 | orchestrator | Saturday 17 May 2025 00:43:19 +0000 (0:00:00.133) 0:00:39.933 **********
2025-05-17 00:43:19.958327 | orchestrator | ok: [testbed-node-5] => {
2025-05-17 00:43:19.959182 | orchestrator |     "ceph_osd_devices": {
2025-05-17 00:43:19.960513 | orchestrator |         "sdb": {
2025-05-17 00:43:19.966766 | orchestrator |             "osd_lvm_uuid": "a83a275b-240b-53eb-892d-9c3e23ab252d"
2025-05-17 00:43:19.966799 | orchestrator |         },
2025-05-17 00:43:19.966811 | orchestrator |         "sdc": {
2025-05-17 00:43:19.966819 | orchestrator |             "osd_lvm_uuid": "b4d5f2e3-0e32-57e8-8b55-58d04db15593"
2025-05-17 00:43:19.966826 | orchestrator |         }
2025-05-17 00:43:19.967591 | orchestrator |     }
2025-05-17 00:43:19.968560 | orchestrator | }
2025-05-17 00:43:19.969050 | orchestrator |
2025-05-17 00:43:19.970133 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-17 00:43:19.971039 | orchestrator | Saturday 17 May 2025 00:43:19 +0000 (0:00:00.349) 0:00:40.282 **********
2025-05-17 00:43:20.096441 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:20.098576 | orchestrator |
2025-05-17 00:43:20.100178 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-17 00:43:20.101214 | orchestrator | Saturday 17 May 2025 00:43:20 +0000 (0:00:00.138) 0:00:40.421 **********
2025-05-17 00:43:20.246238 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:20.246325 | orchestrator |
2025-05-17 00:43:20.246338 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-17 00:43:20.246373 | orchestrator | Saturday 17 May 2025 00:43:20 +0000 (0:00:00.145) 0:00:40.567 **********
2025-05-17 00:43:20.382926 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:43:20.384256 | orchestrator |
2025-05-17 00:43:20.385100 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-17 00:43:20.386719 | orchestrator | Saturday 17 May 2025 00:43:20 +0000 (0:00:00.136) 0:00:40.704 **********
2025-05-17 00:43:20.653465 | orchestrator | changed: [testbed-node-5] => {
2025-05-17 00:43:20.654686 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-17 00:43:20.655674 | orchestrator |         "ceph_osd_devices": {
2025-05-17 00:43:20.657192 | orchestrator |             "sdb": {
2025-05-17 00:43:20.657639 | orchestrator |                 "osd_lvm_uuid": "a83a275b-240b-53eb-892d-9c3e23ab252d"
2025-05-17 00:43:20.659123 | orchestrator |             },
2025-05-17 00:43:20.659446 | orchestrator |             "sdc": {
2025-05-17 00:43:20.660039 | orchestrator |                 "osd_lvm_uuid": "b4d5f2e3-0e32-57e8-8b55-58d04db15593"
2025-05-17 00:43:20.660692 | orchestrator |             }
2025-05-17 00:43:20.661071 | orchestrator |         },
2025-05-17 00:43:20.662570 | orchestrator |         "lvm_volumes": [
2025-05-17 00:43:20.663137 | orchestrator |             {
2025-05-17 00:43:20.663800 | orchestrator |                 "data": "osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d",
2025-05-17 00:43:20.664661 | orchestrator |                 "data_vg": "ceph-a83a275b-240b-53eb-892d-9c3e23ab252d"
2025-05-17 00:43:20.665002 | orchestrator |             },
2025-05-17 00:43:20.665476 | orchestrator |             {
2025-05-17 00:43:20.665772 | orchestrator |                 "data": "osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593",
2025-05-17 00:43:20.666059 | orchestrator |                 "data_vg": "ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593"
2025-05-17 00:43:20.666374 | orchestrator |             }
2025-05-17 00:43:20.667166 | orchestrator |         ]
2025-05-17 00:43:20.667188 | orchestrator |     }
2025-05-17 00:43:20.667550 | orchestrator | }
2025-05-17 00:43:20.668043 | orchestrator |
2025-05-17 00:43:20.668162 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-17 00:43:20.668568 | orchestrator | Saturday 17 May 2025 00:43:20 +0000 (0:00:00.273) 0:00:40.977 **********
2025-05-17 00:43:21.772223 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-17 00:43:21.772340 | orchestrator |
2025-05-17 00:43:21.772979 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:43:21.773028 | orchestrator | 2025-05-17 00:43:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:43:21.773469 | orchestrator | 2025-05-17 00:43:21 | INFO  | Please wait and do not abort execution.
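The configuration dump above shows how the "Generate lvm_volumes structure (block only)" step derives its entries: for each device's osd_lvm_uuid, the LV is named osd-block-<uuid> and the VG ceph-<uuid>. A minimal Python sketch of that naming scheme (the actual playbook builds this via Jinja templating; this only mirrors the values visible in this run's output):

```python
# Input as printed by "Print ceph_osd_devices" above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "a83a275b-240b-53eb-892d-9c3e23ab252d"},
    "sdc": {"osd_lvm_uuid": "b4d5f2e3-0e32-57e8-8b55-58d04db15593"},
}

# Block-only layout (no separate DB/WAL devices, matching this run):
# each OSD gets LV "osd-block-<uuid>" inside VG "ceph-<uuid>".
lvm_volumes = [
    {"data": f"osd-block-{cfg['osd_lvm_uuid']}",
     "data_vg": f"ceph-{cfg['osd_lvm_uuid']}"}
    for cfg in ceph_osd_devices.values()
]
print(lvm_volumes)
```

With DB or WAL devices present, the skipped "block + db"/"block + wal" variants would add db/db_vg or wal/wal_vg keys per entry.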
2025-05-17 00:43:21.774187 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-17 00:43:21.774714 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-17 00:43:21.775465 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-17 00:43:21.775888 | orchestrator |
2025-05-17 00:43:21.776295 | orchestrator |
2025-05-17 00:43:21.776816 | orchestrator |
2025-05-17 00:43:21.777609 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 00:43:21.778082 | orchestrator | Saturday 17 May 2025 00:43:21 +0000 (0:00:01.118) 0:00:42.096 **********
2025-05-17 00:43:21.779214 | orchestrator | ===============================================================================
2025-05-17 00:43:21.780190 | orchestrator | Write configuration file ------------------------------------------------ 4.43s
2025-05-17 00:43:21.780210 | orchestrator | Add known links to the list of available block devices ------------------ 1.39s
2025-05-17 00:43:21.780810 | orchestrator | Add known partitions to the list of available block devices ------------- 1.32s
2025-05-17 00:43:21.783667 | orchestrator | Print configuration data ------------------------------------------------ 1.00s
2025-05-17 00:43:21.783698 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-05-17 00:43:21.783709 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-05-17 00:43:21.783720 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-05-17 00:43:21.783731 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.80s
2025-05-17 00:43:21.783742 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s
2025-05-17 00:43:21.784078 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s
2025-05-17 00:43:21.784492 | orchestrator | Get initial list of available block devices ----------------------------- 0.76s
2025-05-17 00:43:21.784670 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.67s
2025-05-17 00:43:21.785566 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-05-17 00:43:21.785751 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-05-17 00:43:21.786215 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-05-17 00:43:21.787105 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-05-17 00:43:21.787128 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-05-17 00:43:21.787392 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.59s
2025-05-17 00:43:21.788065 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.58s
2025-05-17 00:43:21.788263 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s
2025-05-17 00:43:33.841682 | orchestrator | 2025-05-17 00:43:33 | INFO  | Task f3b84508-13c7-49da-a808-c11ea4739979 is running in background. Output coming soon.
2025-05-17 00:44:09.829048 | orchestrator | 2025-05-17 00:44:01 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-17 00:44:09.829203 | orchestrator | 2025-05-17 00:44:01 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-17 00:44:09.829235 | orchestrator | 2025-05-17 00:44:01 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-17 00:44:09.829248 | orchestrator | 2025-05-17 00:44:01 | INFO  | Handling group overwrites in 99-overwrite
2025-05-17 00:44:09.829260 | orchestrator | 2025-05-17 00:44:01 | INFO  | Removing group frr:children from 60-generic
2025-05-17 00:44:09.829271 | orchestrator | 2025-05-17 00:44:01 | INFO  | Removing group storage:children from 50-kolla
2025-05-17 00:44:09.829282 | orchestrator | 2025-05-17 00:44:01 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-17 00:44:09.829293 | orchestrator | 2025-05-17 00:44:01 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-17 00:44:09.829304 | orchestrator | 2025-05-17 00:44:01 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-17 00:44:09.829315 | orchestrator | 2025-05-17 00:44:01 | INFO  | Handling group overwrites in 20-roles
2025-05-17 00:44:09.829326 | orchestrator | 2025-05-17 00:44:01 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-17 00:44:09.829337 | orchestrator | 2025-05-17 00:44:02 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-17 00:44:09.829348 | orchestrator | 2025-05-17 00:44:09 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-17 00:44:11.437150 | orchestrator | 2025-05-17 00:44:11 | INFO  | Task 7a03cc6a-7453-40a1-81f2-b120d90547d0 (ceph-create-lvm-devices) was prepared for execution.
2025-05-17 00:44:11.437262 | orchestrator | 2025-05-17 00:44:11 | INFO  | It takes a moment until task 7a03cc6a-7453-40a1-81f2-b120d90547d0 (ceph-create-lvm-devices) has been started and output is visible here.
2025-05-17 00:44:14.317448 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-17 00:44:14.797671 | orchestrator |
2025-05-17 00:44:14.799337 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-17 00:44:14.799499 | orchestrator |
2025-05-17 00:44:14.800555 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-17 00:44:14.801450 | orchestrator | Saturday 17 May 2025 00:44:14 +0000 (0:00:00.414) 0:00:00.414 **********
2025-05-17 00:44:15.034274 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-17 00:44:15.034442 | orchestrator |
2025-05-17 00:44:15.035975 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-17 00:44:15.038100 | orchestrator | Saturday 17 May 2025 00:44:15 +0000 (0:00:00.237) 0:00:00.651 **********
2025-05-17 00:44:15.251423 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:44:15.251627 | orchestrator |
2025-05-17 00:44:15.252310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:15.252810 | orchestrator | Saturday 17 May 2025 00:44:15 +0000 (0:00:00.217) 0:00:00.869 **********
2025-05-17 00:44:15.943490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-17 00:44:15.944309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-17 00:44:15.947072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-17 00:44:15.947097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-17 00:44:15.948011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-17 00:44:15.948603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-17 00:44:15.949361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-17 00:44:15.949998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-17 00:44:15.950934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-17 00:44:15.951437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-17 00:44:15.952047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-17 00:44:15.952618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-17 00:44:15.953304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-17 00:44:15.953746 | orchestrator |
2025-05-17 00:44:15.954507 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:15.955279 | orchestrator | Saturday 17 May 2025 00:44:15 +0000 (0:00:00.690) 0:00:01.560 **********
2025-05-17 00:44:16.142720 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:16.142972 | orchestrator |
2025-05-17 00:44:16.143629 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:16.145174 | orchestrator | Saturday 17 May 2025 00:44:16 +0000 (0:00:00.199) 0:00:01.759 **********
2025-05-17 00:44:16.332448 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:16.332632 | orchestrator |
2025-05-17 00:44:16.333394 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:16.335038 | orchestrator | Saturday 17 May 2025 00:44:16 +0000 (0:00:00.190) 0:00:01.950 **********
2025-05-17 00:44:16.516804 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:16.516982 | orchestrator |
2025-05-17 00:44:16.517162 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:16.517514 | orchestrator | Saturday 17 May 2025 00:44:16 +0000 (0:00:00.181) 0:00:02.131 **********
2025-05-17 00:44:16.705070 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:16.705266 | orchestrator |
2025-05-17 00:44:16.705635 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:16.706440 | orchestrator | Saturday 17 May 2025 00:44:16 +0000 (0:00:00.190) 0:00:02.322 **********
2025-05-17 00:44:16.907344 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:16.907438 | orchestrator |
2025-05-17 00:44:16.907453 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:16.907573 | orchestrator | Saturday 17 May 2025 00:44:16 +0000 (0:00:00.202) 0:00:02.525 **********
2025-05-17 00:44:17.101231 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:17.101435 | orchestrator |
2025-05-17 00:44:17.102400 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:17.103424 | orchestrator | Saturday 17 May 2025 00:44:17 +0000 (0:00:00.193) 0:00:02.718 **********
2025-05-17 00:44:17.291357 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:17.292114 | orchestrator |
2025-05-17 00:44:17.292820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:17.293289 | orchestrator | Saturday 17 May 2025 00:44:17 +0000 (0:00:00.191) 0:00:02.909 **********
2025-05-17 00:44:17.492508 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:17.493498 | orchestrator |
2025-05-17 00:44:17.493657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:17.494333 | orchestrator | Saturday 17 May 2025 00:44:17 +0000 (0:00:00.199) 0:00:03.109 **********
2025-05-17 00:44:18.080797 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c)
2025-05-17 00:44:18.081732 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c)
2025-05-17 00:44:18.082589 | orchestrator |
2025-05-17 00:44:18.084584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:18.084607 | orchestrator | Saturday 17 May 2025 00:44:18 +0000 (0:00:00.589) 0:00:03.698 **********
2025-05-17 00:44:18.866383 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4c541808-fecb-473a-bfa6-e6107b1a17c0)
2025-05-17 00:44:18.869759 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4c541808-fecb-473a-bfa6-e6107b1a17c0)
2025-05-17 00:44:18.869790 | orchestrator |
2025-05-17 00:44:18.871273 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:18.871300 | orchestrator | Saturday 17 May 2025 00:44:18 +0000 (0:00:00.783) 0:00:04.482 **********
2025-05-17 00:44:19.312601 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0e5716a4-9f06-4595-a8e5-44869be2d3e3)
2025-05-17 00:44:19.313564 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0e5716a4-9f06-4595-a8e5-44869be2d3e3)
2025-05-17 00:44:19.314489 | orchestrator |
2025-05-17 00:44:19.316757 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:19.316781 | orchestrator | Saturday 17 May 2025 00:44:19 +0000 (0:00:00.447) 0:00:04.930 **********
2025-05-17 00:44:19.752303 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6120ef73-2521-4d83-8ac9-34a2289f978b)
2025-05-17 00:44:19.752636 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6120ef73-2521-4d83-8ac9-34a2289f978b)
2025-05-17 00:44:19.753226 | orchestrator |
2025-05-17 00:44:19.755687 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-17 00:44:19.756046 | orchestrator | Saturday 17 May 2025 00:44:19 +0000 (0:00:00.438) 0:00:05.368 **********
2025-05-17 00:44:20.083436 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-17 00:44:20.083641 | orchestrator |
2025-05-17 00:44:20.085237 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:20.086531 | orchestrator | Saturday 17 May 2025 00:44:20 +0000 (0:00:00.331) 0:00:05.700 **********
2025-05-17 00:44:20.536794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-17 00:44:20.536984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-17 00:44:20.538565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-17 00:44:20.539732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-17 00:44:20.540662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-17 00:44:20.542348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-17 00:44:20.542770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-17 00:44:20.543407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-17 00:44:20.543747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-17 00:44:20.544088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-17 00:44:20.544562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-17 00:44:20.545178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-17 00:44:20.545234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-17 00:44:20.545776 | orchestrator |
2025-05-17 00:44:20.547760 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:20.547853 | orchestrator | Saturday 17 May 2025 00:44:20 +0000 (0:00:00.451) 0:00:06.151 **********
2025-05-17 00:44:20.742754 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:20.743068 | orchestrator |
2025-05-17 00:44:20.744116 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:20.744478 | orchestrator | Saturday 17 May 2025 00:44:20 +0000 (0:00:00.208) 0:00:06.360 **********
2025-05-17 00:44:20.929093 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:20.930134 | orchestrator |
2025-05-17 00:44:20.931237 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:20.932277 | orchestrator | Saturday 17 May 2025 00:44:20 +0000 (0:00:00.187) 0:00:06.547 **********
2025-05-17 00:44:21.119079 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:21.119295 | orchestrator |
2025-05-17 00:44:21.120561 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:21.122615 | orchestrator | Saturday 17 May 2025 00:44:21 +0000 (0:00:00.189) 0:00:06.736 **********
2025-05-17 00:44:21.304321 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:21.304524 | orchestrator |
2025-05-17 00:44:21.305769 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:21.306624 | orchestrator | Saturday 17 May 2025 00:44:21 +0000 (0:00:00.185) 0:00:06.921 **********
2025-05-17 00:44:21.834369 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:21.834938 | orchestrator |
2025-05-17 00:44:21.835744 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:21.837973 | orchestrator | Saturday 17 May 2025 00:44:21 +0000 (0:00:00.529) 0:00:07.451 **********
2025-05-17 00:44:22.048306 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:22.048410 | orchestrator |
2025-05-17 00:44:22.049855 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:22.050196 | orchestrator | Saturday 17 May 2025 00:44:22 +0000 (0:00:00.214) 0:00:07.666 **********
2025-05-17 00:44:22.241508 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:22.242341 | orchestrator |
2025-05-17 00:44:22.243076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:22.244157 | orchestrator | Saturday 17 May 2025 00:44:22 +0000 (0:00:00.194) 0:00:07.860 **********
2025-05-17 00:44:22.440968 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:22.442451 | orchestrator |
2025-05-17 00:44:22.442580 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-17 00:44:22.445846 | orchestrator | Saturday 17 May 2025 00:44:22 +0000 (0:00:00.196) 0:00:08.056 **********
2025-05-17 00:44:23.076651 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-17 00:44:23.076882 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-17 00:44:23.077359 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-17 00:44:23.077630 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-17 00:44:23.078198 | orchestrator |
2025-05-17
00:44:23.078394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:23.079052 | orchestrator | Saturday 17 May 2025 00:44:23 +0000 (0:00:00.637) 0:00:08.694 ********** 2025-05-17 00:44:23.281654 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:23.282844 | orchestrator | 2025-05-17 00:44:23.283360 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:23.284573 | orchestrator | Saturday 17 May 2025 00:44:23 +0000 (0:00:00.205) 0:00:08.899 ********** 2025-05-17 00:44:23.466730 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:23.466968 | orchestrator | 2025-05-17 00:44:23.467967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:23.468761 | orchestrator | Saturday 17 May 2025 00:44:23 +0000 (0:00:00.184) 0:00:09.083 ********** 2025-05-17 00:44:23.660284 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:23.662782 | orchestrator | 2025-05-17 00:44:23.662819 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:23.663604 | orchestrator | Saturday 17 May 2025 00:44:23 +0000 (0:00:00.193) 0:00:09.277 ********** 2025-05-17 00:44:23.850214 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:23.851269 | orchestrator | 2025-05-17 00:44:23.851880 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-17 00:44:23.852927 | orchestrator | Saturday 17 May 2025 00:44:23 +0000 (0:00:00.189) 0:00:09.467 ********** 2025-05-17 00:44:23.985848 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:23.986844 | orchestrator | 2025-05-17 00:44:23.987528 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-17 00:44:23.988088 | orchestrator | Saturday 17 May 2025 00:44:23 +0000 (0:00:00.136) 
0:00:09.603 ********** 2025-05-17 00:44:24.207639 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dd92559-5dfb-56e9-86ff-64c31a268c5e'}}) 2025-05-17 00:44:24.208267 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25c991a6-e724-5c1a-b659-154410c60242'}}) 2025-05-17 00:44:24.209981 | orchestrator | 2025-05-17 00:44:24.211238 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-17 00:44:24.212247 | orchestrator | Saturday 17 May 2025 00:44:24 +0000 (0:00:00.221) 0:00:09.824 ********** 2025-05-17 00:44:26.515881 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'}) 2025-05-17 00:44:26.516266 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'}) 2025-05-17 00:44:26.516578 | orchestrator | 2025-05-17 00:44:26.517074 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-17 00:44:26.517444 | orchestrator | Saturday 17 May 2025 00:44:26 +0000 (0:00:02.307) 0:00:12.132 ********** 2025-05-17 00:44:26.684852 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:26.685070 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:26.685179 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:26.685703 | orchestrator | 2025-05-17 00:44:26.686093 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-17 00:44:26.686591 | orchestrator | Saturday 17 May 2025 
00:44:26 +0000 (0:00:00.168) 0:00:12.301 ********** 2025-05-17 00:44:28.174778 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'}) 2025-05-17 00:44:28.174886 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'}) 2025-05-17 00:44:28.175110 | orchestrator | 2025-05-17 00:44:28.176310 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-17 00:44:28.177348 | orchestrator | Saturday 17 May 2025 00:44:28 +0000 (0:00:01.488) 0:00:13.790 ********** 2025-05-17 00:44:28.341362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:28.341578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:28.342973 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:28.344132 | orchestrator | 2025-05-17 00:44:28.346059 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-17 00:44:28.346453 | orchestrator | Saturday 17 May 2025 00:44:28 +0000 (0:00:00.167) 0:00:13.957 ********** 2025-05-17 00:44:28.483224 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:28.483372 | orchestrator | 2025-05-17 00:44:28.484258 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-17 00:44:28.485034 | orchestrator | Saturday 17 May 2025 00:44:28 +0000 (0:00:00.142) 0:00:14.100 ********** 2025-05-17 00:44:28.654269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 
'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:28.657383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:28.657415 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:28.657752 | orchestrator | 2025-05-17 00:44:28.658764 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-17 00:44:28.659697 | orchestrator | Saturday 17 May 2025 00:44:28 +0000 (0:00:00.169) 0:00:14.270 ********** 2025-05-17 00:44:28.795663 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:28.796194 | orchestrator | 2025-05-17 00:44:28.797537 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-17 00:44:28.798173 | orchestrator | Saturday 17 May 2025 00:44:28 +0000 (0:00:00.143) 0:00:14.413 ********** 2025-05-17 00:44:28.977046 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:28.977606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:28.978799 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:28.979672 | orchestrator | 2025-05-17 00:44:28.980173 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-17 00:44:28.980842 | orchestrator | Saturday 17 May 2025 00:44:28 +0000 (0:00:00.181) 0:00:14.595 ********** 2025-05-17 00:44:29.271113 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:29.271430 | orchestrator | 2025-05-17 00:44:29.272394 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-17 00:44:29.273579 | orchestrator | 
Saturday 17 May 2025 00:44:29 +0000 (0:00:00.290) 0:00:14.885 ********** 2025-05-17 00:44:29.435741 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:29.437207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:29.438666 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:29.440018 | orchestrator | 2025-05-17 00:44:29.441171 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-17 00:44:29.441811 | orchestrator | Saturday 17 May 2025 00:44:29 +0000 (0:00:00.168) 0:00:15.054 ********** 2025-05-17 00:44:29.578989 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:44:29.580538 | orchestrator | 2025-05-17 00:44:29.582448 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-17 00:44:29.583026 | orchestrator | Saturday 17 May 2025 00:44:29 +0000 (0:00:00.142) 0:00:15.196 ********** 2025-05-17 00:44:29.746582 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:29.747522 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:29.747548 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:29.748332 | orchestrator | 2025-05-17 00:44:29.748754 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-17 00:44:29.749664 | orchestrator | Saturday 17 May 2025 00:44:29 +0000 (0:00:00.167) 0:00:15.364 ********** 2025-05-17 00:44:29.908822 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:29.909560 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:29.910576 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:29.911184 | orchestrator | 2025-05-17 00:44:29.912030 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-17 00:44:29.913012 | orchestrator | Saturday 17 May 2025 00:44:29 +0000 (0:00:00.162) 0:00:15.527 ********** 2025-05-17 00:44:30.062161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:30.062784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:30.064855 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:30.068092 | orchestrator | 2025-05-17 00:44:30.068134 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-17 00:44:30.068148 | orchestrator | Saturday 17 May 2025 00:44:30 +0000 (0:00:00.151) 0:00:15.679 ********** 2025-05-17 00:44:30.197939 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:30.199412 | orchestrator | 2025-05-17 00:44:30.200620 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-17 00:44:30.201528 | orchestrator | Saturday 17 May 2025 00:44:30 +0000 (0:00:00.136) 0:00:15.815 ********** 2025-05-17 00:44:30.344458 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:30.345371 | orchestrator | 2025-05-17 00:44:30.346433 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-05-17 00:44:30.348507 | orchestrator | Saturday 17 May 2025 00:44:30 +0000 (0:00:00.146) 0:00:15.961 ********** 2025-05-17 00:44:30.497445 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:30.497550 | orchestrator | 2025-05-17 00:44:30.499603 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-17 00:44:30.500102 | orchestrator | Saturday 17 May 2025 00:44:30 +0000 (0:00:00.150) 0:00:16.112 ********** 2025-05-17 00:44:30.640772 | orchestrator | ok: [testbed-node-3] => { 2025-05-17 00:44:30.641623 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-17 00:44:30.644465 | orchestrator | } 2025-05-17 00:44:30.644489 | orchestrator | 2025-05-17 00:44:30.644539 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-17 00:44:30.644552 | orchestrator | Saturday 17 May 2025 00:44:30 +0000 (0:00:00.144) 0:00:16.257 ********** 2025-05-17 00:44:30.786661 | orchestrator | ok: [testbed-node-3] => { 2025-05-17 00:44:30.789477 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-17 00:44:30.789534 | orchestrator | } 2025-05-17 00:44:30.789548 | orchestrator | 2025-05-17 00:44:30.789631 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-17 00:44:30.790124 | orchestrator | Saturday 17 May 2025 00:44:30 +0000 (0:00:00.145) 0:00:16.402 ********** 2025-05-17 00:44:30.974338 | orchestrator | ok: [testbed-node-3] => { 2025-05-17 00:44:30.974438 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-17 00:44:30.975676 | orchestrator | } 2025-05-17 00:44:30.976679 | orchestrator | 2025-05-17 00:44:30.978198 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-17 00:44:30.979372 | orchestrator | Saturday 17 May 2025 00:44:30 +0000 (0:00:00.187) 0:00:16.590 ********** 2025-05-17 00:44:32.021769 | orchestrator | ok: 
[testbed-node-3] 2025-05-17 00:44:32.021887 | orchestrator | 2025-05-17 00:44:32.023269 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-17 00:44:32.026207 | orchestrator | Saturday 17 May 2025 00:44:32 +0000 (0:00:01.046) 0:00:17.637 ********** 2025-05-17 00:44:32.547525 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:44:32.547630 | orchestrator | 2025-05-17 00:44:32.547647 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-17 00:44:32.547660 | orchestrator | Saturday 17 May 2025 00:44:32 +0000 (0:00:00.526) 0:00:18.164 ********** 2025-05-17 00:44:33.088458 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:44:33.088581 | orchestrator | 2025-05-17 00:44:33.088701 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-17 00:44:33.090102 | orchestrator | Saturday 17 May 2025 00:44:33 +0000 (0:00:00.540) 0:00:18.704 ********** 2025-05-17 00:44:33.238613 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:44:33.239284 | orchestrator | 2025-05-17 00:44:33.240093 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-17 00:44:33.240951 | orchestrator | Saturday 17 May 2025 00:44:33 +0000 (0:00:00.151) 0:00:18.856 ********** 2025-05-17 00:44:33.336798 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:33.337618 | orchestrator | 2025-05-17 00:44:33.338971 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-17 00:44:33.339826 | orchestrator | Saturday 17 May 2025 00:44:33 +0000 (0:00:00.097) 0:00:18.953 ********** 2025-05-17 00:44:33.449750 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:33.450727 | orchestrator | 2025-05-17 00:44:33.451101 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-17 00:44:33.452004 | orchestrator | 
Saturday 17 May 2025 00:44:33 +0000 (0:00:00.113) 0:00:19.067 ********** 2025-05-17 00:44:33.597421 | orchestrator | ok: [testbed-node-3] => { 2025-05-17 00:44:33.597947 | orchestrator |  "vgs_report": { 2025-05-17 00:44:33.598819 | orchestrator |  "vg": [] 2025-05-17 00:44:33.599865 | orchestrator |  } 2025-05-17 00:44:33.600511 | orchestrator | } 2025-05-17 00:44:33.601382 | orchestrator | 2025-05-17 00:44:33.601824 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-17 00:44:33.602355 | orchestrator | Saturday 17 May 2025 00:44:33 +0000 (0:00:00.147) 0:00:19.214 ********** 2025-05-17 00:44:33.745256 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:33.745594 | orchestrator | 2025-05-17 00:44:33.748207 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-17 00:44:33.748816 | orchestrator | Saturday 17 May 2025 00:44:33 +0000 (0:00:00.145) 0:00:19.360 ********** 2025-05-17 00:44:33.881153 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:33.881692 | orchestrator | 2025-05-17 00:44:33.882586 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-17 00:44:33.884140 | orchestrator | Saturday 17 May 2025 00:44:33 +0000 (0:00:00.138) 0:00:19.498 ********** 2025-05-17 00:44:34.005257 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:34.005928 | orchestrator | 2025-05-17 00:44:34.006717 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-17 00:44:34.007597 | orchestrator | Saturday 17 May 2025 00:44:34 +0000 (0:00:00.124) 0:00:19.623 ********** 2025-05-17 00:44:34.148951 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:34.149773 | orchestrator | 2025-05-17 00:44:34.150849 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-17 00:44:34.152130 | orchestrator | 
Saturday 17 May 2025 00:44:34 +0000 (0:00:00.142) 0:00:19.765 ********** 2025-05-17 00:44:34.464144 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:34.467954 | orchestrator | 2025-05-17 00:44:34.468308 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-17 00:44:34.468721 | orchestrator | Saturday 17 May 2025 00:44:34 +0000 (0:00:00.313) 0:00:20.079 ********** 2025-05-17 00:44:34.603409 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:34.603770 | orchestrator | 2025-05-17 00:44:34.604774 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-17 00:44:34.605681 | orchestrator | Saturday 17 May 2025 00:44:34 +0000 (0:00:00.142) 0:00:20.221 ********** 2025-05-17 00:44:34.751842 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:34.752130 | orchestrator | 2025-05-17 00:44:34.752158 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-17 00:44:34.752171 | orchestrator | Saturday 17 May 2025 00:44:34 +0000 (0:00:00.147) 0:00:20.368 ********** 2025-05-17 00:44:34.884618 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:34.884728 | orchestrator | 2025-05-17 00:44:34.885229 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-17 00:44:34.885707 | orchestrator | Saturday 17 May 2025 00:44:34 +0000 (0:00:00.130) 0:00:20.498 ********** 2025-05-17 00:44:35.018462 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:35.018756 | orchestrator | 2025-05-17 00:44:35.019699 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-17 00:44:35.020407 | orchestrator | Saturday 17 May 2025 00:44:35 +0000 (0:00:00.137) 0:00:20.636 ********** 2025-05-17 00:44:35.151527 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:35.151781 | orchestrator | 2025-05-17 00:44:35.152117 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-17 00:44:35.153031 | orchestrator | Saturday 17 May 2025 00:44:35 +0000 (0:00:00.132) 0:00:20.769 ********** 2025-05-17 00:44:35.296107 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:35.296816 | orchestrator | 2025-05-17 00:44:35.298272 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-17 00:44:35.298635 | orchestrator | Saturday 17 May 2025 00:44:35 +0000 (0:00:00.144) 0:00:20.914 ********** 2025-05-17 00:44:35.433677 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:35.435015 | orchestrator | 2025-05-17 00:44:35.435600 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-17 00:44:35.436398 | orchestrator | Saturday 17 May 2025 00:44:35 +0000 (0:00:00.137) 0:00:21.051 ********** 2025-05-17 00:44:35.571613 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:35.571800 | orchestrator | 2025-05-17 00:44:35.572102 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-17 00:44:35.572463 | orchestrator | Saturday 17 May 2025 00:44:35 +0000 (0:00:00.138) 0:00:21.190 ********** 2025-05-17 00:44:35.698605 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:35.699119 | orchestrator | 2025-05-17 00:44:35.699835 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-17 00:44:35.700402 | orchestrator | Saturday 17 May 2025 00:44:35 +0000 (0:00:00.119) 0:00:21.309 ********** 2025-05-17 00:44:35.851521 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:35.854108 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 
'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:35.857258 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:35.857293 | orchestrator | 2025-05-17 00:44:35.859605 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-17 00:44:35.862171 | orchestrator | Saturday 17 May 2025 00:44:35 +0000 (0:00:00.159) 0:00:21.469 ********** 2025-05-17 00:44:36.006317 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:36.006403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:36.006439 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:36.006454 | orchestrator | 2025-05-17 00:44:36.006554 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-17 00:44:36.006571 | orchestrator | Saturday 17 May 2025 00:44:36 +0000 (0:00:00.153) 0:00:21.623 ********** 2025-05-17 00:44:36.369836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:36.370334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:36.372962 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:36.374310 | orchestrator | 2025-05-17 00:44:36.375287 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-17 00:44:36.376472 | orchestrator | Saturday 17 May 2025 00:44:36 +0000 (0:00:00.363) 0:00:21.987 ********** 2025-05-17 00:44:36.544387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:36.544542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:36.544701 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:36.545183 | orchestrator | 2025-05-17 00:44:36.545484 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-17 00:44:36.546114 | orchestrator | Saturday 17 May 2025 00:44:36 +0000 (0:00:00.175) 0:00:22.162 ********** 2025-05-17 00:44:36.732602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:36.732700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:36.733240 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:36.733764 | orchestrator | 2025-05-17 00:44:36.734383 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-17 00:44:36.734693 | orchestrator | Saturday 17 May 2025 00:44:36 +0000 (0:00:00.187) 0:00:22.350 ********** 2025-05-17 00:44:36.903886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:36.904357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:36.905281 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:36.906191 | orchestrator | 2025-05-17 00:44:36.906686 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-05-17 00:44:36.907408 | orchestrator | Saturday 17 May 2025 00:44:36 +0000 (0:00:00.171) 0:00:22.522 ********** 2025-05-17 00:44:37.071378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:37.071923 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:37.072853 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:37.073604 | orchestrator | 2025-05-17 00:44:37.074873 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-17 00:44:37.075330 | orchestrator | Saturday 17 May 2025 00:44:37 +0000 (0:00:00.166) 0:00:22.689 ********** 2025-05-17 00:44:37.233784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:37.234125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:37.235247 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:37.236393 | orchestrator | 2025-05-17 00:44:37.237166 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-17 00:44:37.237844 | orchestrator | Saturday 17 May 2025 00:44:37 +0000 (0:00:00.162) 0:00:22.851 ********** 2025-05-17 00:44:37.776969 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:44:37.777070 | orchestrator | 2025-05-17 00:44:37.777086 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-17 00:44:37.777983 | orchestrator | Saturday 17 May 2025 00:44:37 +0000 
(0:00:00.540) 0:00:23.392 ********** 2025-05-17 00:44:38.293305 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:44:38.293414 | orchestrator | 2025-05-17 00:44:38.293498 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-17 00:44:38.293825 | orchestrator | Saturday 17 May 2025 00:44:38 +0000 (0:00:00.519) 0:00:23.911 ********** 2025-05-17 00:44:38.455315 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:44:38.455412 | orchestrator | 2025-05-17 00:44:38.456287 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-17 00:44:38.457219 | orchestrator | Saturday 17 May 2025 00:44:38 +0000 (0:00:00.161) 0:00:24.073 ********** 2025-05-17 00:44:38.652565 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'vg_name': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'}) 2025-05-17 00:44:38.652959 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'vg_name': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'}) 2025-05-17 00:44:38.652989 | orchestrator | 2025-05-17 00:44:38.654767 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-17 00:44:38.657069 | orchestrator | Saturday 17 May 2025 00:44:38 +0000 (0:00:00.196) 0:00:24.270 ********** 2025-05-17 00:44:38.998272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})  2025-05-17 00:44:38.999044 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})  2025-05-17 00:44:39.002114 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:44:39.002148 | orchestrator | 2025-05-17 00:44:39.002341 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] *************************
2025-05-17 00:44:39.003748 | orchestrator | Saturday 17 May 2025 00:44:38 +0000 (0:00:00.344) 0:00:24.615 **********
2025-05-17 00:44:39.192527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})
2025-05-17 00:44:39.192634 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})
2025-05-17 00:44:39.193052 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:39.193549 | orchestrator |
2025-05-17 00:44:39.194413 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-17 00:44:39.194910 | orchestrator | Saturday 17 May 2025 00:44:39 +0000 (0:00:00.194) 0:00:24.809 **********
2025-05-17 00:44:39.375557 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})
2025-05-17 00:44:39.375714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})
2025-05-17 00:44:39.377654 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:44:39.378628 | orchestrator |
2025-05-17 00:44:39.380085 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-17 00:44:39.380319 | orchestrator | Saturday 17 May 2025 00:44:39 +0000 (0:00:00.183) 0:00:24.993 **********
2025-05-17 00:44:40.041323 | orchestrator | ok: [testbed-node-3] => {
2025-05-17 00:44:40.041420 | orchestrator |  "lvm_report": {
2025-05-17 00:44:40.042534 | orchestrator |  "lv": [
2025-05-17 00:44:40.043441 | orchestrator |  {
2025-05-17 00:44:40.044721 | orchestrator |  "lv_name": "osd-block-25c991a6-e724-5c1a-b659-154410c60242",
2025-05-17 00:44:40.044974 | orchestrator |  "vg_name": "ceph-25c991a6-e724-5c1a-b659-154410c60242"
2025-05-17 00:44:40.045786 | orchestrator |  },
2025-05-17 00:44:40.047993 | orchestrator |  {
2025-05-17 00:44:40.048554 | orchestrator |  "lv_name": "osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e",
2025-05-17 00:44:40.049200 | orchestrator |  "vg_name": "ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e"
2025-05-17 00:44:40.049526 | orchestrator |  }
2025-05-17 00:44:40.050251 | orchestrator |  ],
2025-05-17 00:44:40.050568 | orchestrator |  "pv": [
2025-05-17 00:44:40.051010 | orchestrator |  {
2025-05-17 00:44:40.051423 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-17 00:44:40.051992 | orchestrator |  "vg_name": "ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e"
2025-05-17 00:44:40.052223 | orchestrator |  },
2025-05-17 00:44:40.052638 | orchestrator |  {
2025-05-17 00:44:40.053032 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-17 00:44:40.053478 | orchestrator |  "vg_name": "ceph-25c991a6-e724-5c1a-b659-154410c60242"
2025-05-17 00:44:40.053747 | orchestrator |  }
2025-05-17 00:44:40.054110 | orchestrator |  ]
2025-05-17 00:44:40.054479 | orchestrator |  }
2025-05-17 00:44:40.054724 | orchestrator | }
2025-05-17 00:44:40.055141 | orchestrator |
2025-05-17 00:44:40.055465 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-17 00:44:40.055934 | orchestrator |
2025-05-17 00:44:40.056013 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-17 00:44:40.056332 | orchestrator | Saturday 17 May 2025 00:44:40 +0000 (0:00:00.663) 0:00:25.657 **********
2025-05-17 00:44:40.644302 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-17 00:44:40.644418 | orchestrator |
2025-05-17 00:44:40.644819 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-17 
00:44:40.646310 | orchestrator | Saturday 17 May 2025 00:44:40 +0000 (0:00:00.601) 0:00:26.258 ********** 2025-05-17 00:44:40.872651 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:44:40.875034 | orchestrator | 2025-05-17 00:44:40.875075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:40.875088 | orchestrator | Saturday 17 May 2025 00:44:40 +0000 (0:00:00.229) 0:00:26.488 ********** 2025-05-17 00:44:41.305390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-17 00:44:41.306813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-17 00:44:41.307073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-17 00:44:41.307640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-17 00:44:41.309728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-17 00:44:41.309787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-17 00:44:41.309799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-17 00:44:41.310657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-17 00:44:41.311515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-17 00:44:41.312973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-17 00:44:41.313291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-17 00:44:41.314095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-17 00:44:41.314761 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-17 00:44:41.315100 | orchestrator | 2025-05-17 00:44:41.315703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:41.316280 | orchestrator | Saturday 17 May 2025 00:44:41 +0000 (0:00:00.434) 0:00:26.922 ********** 2025-05-17 00:44:41.488405 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:41.489090 | orchestrator | 2025-05-17 00:44:41.490145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:41.490965 | orchestrator | Saturday 17 May 2025 00:44:41 +0000 (0:00:00.183) 0:00:27.106 ********** 2025-05-17 00:44:41.685709 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:41.685801 | orchestrator | 2025-05-17 00:44:41.686127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:41.686565 | orchestrator | Saturday 17 May 2025 00:44:41 +0000 (0:00:00.196) 0:00:27.303 ********** 2025-05-17 00:44:41.873360 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:41.873985 | orchestrator | 2025-05-17 00:44:41.875186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:41.876124 | orchestrator | Saturday 17 May 2025 00:44:41 +0000 (0:00:00.185) 0:00:27.489 ********** 2025-05-17 00:44:42.060436 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:42.061720 | orchestrator | 2025-05-17 00:44:42.062121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:42.063237 | orchestrator | Saturday 17 May 2025 00:44:42 +0000 (0:00:00.189) 0:00:27.678 ********** 2025-05-17 00:44:42.243869 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:42.244092 | orchestrator | 2025-05-17 00:44:42.244571 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2025-05-17 00:44:42.245268 | orchestrator | Saturday 17 May 2025 00:44:42 +0000 (0:00:00.183) 0:00:27.862 ********** 2025-05-17 00:44:42.424156 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:42.424344 | orchestrator | 2025-05-17 00:44:42.425119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:42.427192 | orchestrator | Saturday 17 May 2025 00:44:42 +0000 (0:00:00.178) 0:00:28.041 ********** 2025-05-17 00:44:42.615215 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:42.615707 | orchestrator | 2025-05-17 00:44:42.616203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:42.617123 | orchestrator | Saturday 17 May 2025 00:44:42 +0000 (0:00:00.192) 0:00:28.233 ********** 2025-05-17 00:44:43.188645 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:43.189281 | orchestrator | 2025-05-17 00:44:43.192289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:43.192377 | orchestrator | Saturday 17 May 2025 00:44:43 +0000 (0:00:00.572) 0:00:28.805 ********** 2025-05-17 00:44:43.595607 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee) 2025-05-17 00:44:43.596218 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee) 2025-05-17 00:44:43.597615 | orchestrator | 2025-05-17 00:44:43.598479 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:43.599019 | orchestrator | Saturday 17 May 2025 00:44:43 +0000 (0:00:00.406) 0:00:29.212 ********** 2025-05-17 00:44:44.017335 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6fc6848d-5127-4f65-b412-e829995e25e7) 2025-05-17 00:44:44.017492 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6fc6848d-5127-4f65-b412-e829995e25e7) 2025-05-17 00:44:44.018440 | orchestrator | 2025-05-17 00:44:44.019147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:44.020247 | orchestrator | Saturday 17 May 2025 00:44:44 +0000 (0:00:00.421) 0:00:29.633 ********** 2025-05-17 00:44:44.461434 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e3068b10-d912-449c-8868-8ffe0bc578f0) 2025-05-17 00:44:44.461538 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e3068b10-d912-449c-8868-8ffe0bc578f0) 2025-05-17 00:44:44.465083 | orchestrator | 2025-05-17 00:44:44.465188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:44.465215 | orchestrator | Saturday 17 May 2025 00:44:44 +0000 (0:00:00.443) 0:00:30.077 ********** 2025-05-17 00:44:44.886684 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bec56d32-b1fb-48c0-a20f-a6daa2f9686d) 2025-05-17 00:44:44.887086 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bec56d32-b1fb-48c0-a20f-a6daa2f9686d) 2025-05-17 00:44:44.888419 | orchestrator | 2025-05-17 00:44:44.888567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:44:44.891140 | orchestrator | Saturday 17 May 2025 00:44:44 +0000 (0:00:00.426) 0:00:30.503 ********** 2025-05-17 00:44:45.211980 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-17 00:44:45.212129 | orchestrator | 2025-05-17 00:44:45.212216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:45.212487 | orchestrator | Saturday 17 May 2025 00:44:45 +0000 (0:00:00.326) 0:00:30.830 ********** 2025-05-17 00:44:45.676469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2025-05-17 00:44:45.677353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-17 00:44:45.677471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-17 00:44:45.678510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-17 00:44:45.679776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-17 00:44:45.680972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-17 00:44:45.681826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-17 00:44:45.682716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-17 00:44:45.683049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-17 00:44:45.683529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-17 00:44:45.683773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-17 00:44:45.684280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-17 00:44:45.684594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-17 00:44:45.684918 | orchestrator | 2025-05-17 00:44:45.685320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:45.685692 | orchestrator | Saturday 17 May 2025 00:44:45 +0000 (0:00:00.462) 0:00:31.293 ********** 2025-05-17 00:44:45.891202 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:45.891624 | orchestrator | 2025-05-17 
00:44:45.892013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:45.892752 | orchestrator | Saturday 17 May 2025 00:44:45 +0000 (0:00:00.215) 0:00:31.508 ********** 2025-05-17 00:44:46.103522 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:46.104834 | orchestrator | 2025-05-17 00:44:46.107117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:46.107550 | orchestrator | Saturday 17 May 2025 00:44:46 +0000 (0:00:00.211) 0:00:31.719 ********** 2025-05-17 00:44:46.675271 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:46.676450 | orchestrator | 2025-05-17 00:44:46.677520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:46.678633 | orchestrator | Saturday 17 May 2025 00:44:46 +0000 (0:00:00.573) 0:00:32.293 ********** 2025-05-17 00:44:46.877603 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:46.878272 | orchestrator | 2025-05-17 00:44:46.880237 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:46.881410 | orchestrator | Saturday 17 May 2025 00:44:46 +0000 (0:00:00.202) 0:00:32.495 ********** 2025-05-17 00:44:47.081322 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:47.083849 | orchestrator | 2025-05-17 00:44:47.083883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:47.083897 | orchestrator | Saturday 17 May 2025 00:44:47 +0000 (0:00:00.202) 0:00:32.697 ********** 2025-05-17 00:44:47.273536 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:47.274216 | orchestrator | 2025-05-17 00:44:47.275467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:47.277081 | orchestrator | Saturday 17 May 2025 00:44:47 +0000 (0:00:00.192) 
0:00:32.890 ********** 2025-05-17 00:44:47.477391 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:47.477488 | orchestrator | 2025-05-17 00:44:47.477702 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:47.479881 | orchestrator | Saturday 17 May 2025 00:44:47 +0000 (0:00:00.203) 0:00:33.093 ********** 2025-05-17 00:44:47.679314 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:47.680294 | orchestrator | 2025-05-17 00:44:47.681349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:47.681988 | orchestrator | Saturday 17 May 2025 00:44:47 +0000 (0:00:00.203) 0:00:33.296 ********** 2025-05-17 00:44:48.314578 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-17 00:44:48.316117 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-17 00:44:48.317056 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-17 00:44:48.317847 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-17 00:44:48.318765 | orchestrator | 2025-05-17 00:44:48.319140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:48.319726 | orchestrator | Saturday 17 May 2025 00:44:48 +0000 (0:00:00.633) 0:00:33.930 ********** 2025-05-17 00:44:48.537264 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:48.538087 | orchestrator | 2025-05-17 00:44:48.539049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:48.539433 | orchestrator | Saturday 17 May 2025 00:44:48 +0000 (0:00:00.224) 0:00:34.155 ********** 2025-05-17 00:44:48.742533 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:48.742774 | orchestrator | 2025-05-17 00:44:48.743554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:48.743977 | orchestrator | Saturday 17 
May 2025 00:44:48 +0000 (0:00:00.204) 0:00:34.359 ********** 2025-05-17 00:44:48.947499 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:48.948209 | orchestrator | 2025-05-17 00:44:48.948715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:44:48.952154 | orchestrator | Saturday 17 May 2025 00:44:48 +0000 (0:00:00.205) 0:00:34.565 ********** 2025-05-17 00:44:49.550136 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:49.550360 | orchestrator | 2025-05-17 00:44:49.551386 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-17 00:44:49.552041 | orchestrator | Saturday 17 May 2025 00:44:49 +0000 (0:00:00.603) 0:00:35.168 ********** 2025-05-17 00:44:49.691327 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:49.691435 | orchestrator | 2025-05-17 00:44:49.691739 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-17 00:44:49.692378 | orchestrator | Saturday 17 May 2025 00:44:49 +0000 (0:00:00.141) 0:00:35.309 ********** 2025-05-17 00:44:49.890558 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93bb0954-6685-5c67-a7e0-a3574f092206'}}) 2025-05-17 00:44:49.890769 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e21dde7b-e402-5316-8511-fd8df0cc7e38'}}) 2025-05-17 00:44:49.891400 | orchestrator | 2025-05-17 00:44:49.892054 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-17 00:44:49.892700 | orchestrator | Saturday 17 May 2025 00:44:49 +0000 (0:00:00.198) 0:00:35.508 ********** 2025-05-17 00:44:51.692277 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'}) 2025-05-17 00:44:51.693119 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'}) 2025-05-17 00:44:51.693151 | orchestrator | 2025-05-17 00:44:51.693398 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-17 00:44:51.694155 | orchestrator | Saturday 17 May 2025 00:44:51 +0000 (0:00:01.799) 0:00:37.307 ********** 2025-05-17 00:44:51.854864 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:44:51.855779 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:44:51.856838 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:51.860708 | orchestrator | 2025-05-17 00:44:51.860742 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-17 00:44:51.860755 | orchestrator | Saturday 17 May 2025 00:44:51 +0000 (0:00:00.164) 0:00:37.472 ********** 2025-05-17 00:44:53.167059 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'}) 2025-05-17 00:44:53.167253 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'}) 2025-05-17 00:44:53.168606 | orchestrator | 2025-05-17 00:44:53.170452 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-17 00:44:53.170518 | orchestrator | Saturday 17 May 2025 00:44:53 +0000 (0:00:01.310) 0:00:38.783 ********** 2025-05-17 00:44:53.324442 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 
'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:44:53.325250 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:44:53.328686 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:53.328735 | orchestrator | 2025-05-17 00:44:53.328749 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-17 00:44:53.328762 | orchestrator | Saturday 17 May 2025 00:44:53 +0000 (0:00:00.157) 0:00:38.940 ********** 2025-05-17 00:44:53.473256 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:53.474479 | orchestrator | 2025-05-17 00:44:53.475738 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-17 00:44:53.476638 | orchestrator | Saturday 17 May 2025 00:44:53 +0000 (0:00:00.149) 0:00:39.090 ********** 2025-05-17 00:44:53.644185 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:44:53.644619 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:44:53.648739 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:53.648826 | orchestrator | 2025-05-17 00:44:53.648951 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-17 00:44:53.651323 | orchestrator | Saturday 17 May 2025 00:44:53 +0000 (0:00:00.170) 0:00:39.261 ********** 2025-05-17 00:44:53.960968 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:53.962180 | orchestrator | 2025-05-17 00:44:53.963135 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-17 00:44:53.966504 | orchestrator | 
Saturday 17 May 2025 00:44:53 +0000 (0:00:00.317) 0:00:39.578 ********** 2025-05-17 00:44:54.115039 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:44:54.115662 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:44:54.117416 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:54.120462 | orchestrator | 2025-05-17 00:44:54.121352 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-17 00:44:54.122093 | orchestrator | Saturday 17 May 2025 00:44:54 +0000 (0:00:00.153) 0:00:39.732 ********** 2025-05-17 00:44:54.250755 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:54.250982 | orchestrator | 2025-05-17 00:44:54.254885 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-17 00:44:54.255837 | orchestrator | Saturday 17 May 2025 00:44:54 +0000 (0:00:00.134) 0:00:39.866 ********** 2025-05-17 00:44:54.401568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:44:54.402197 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:44:54.406559 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:54.407606 | orchestrator | 2025-05-17 00:44:54.408670 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-17 00:44:54.409248 | orchestrator | Saturday 17 May 2025 00:44:54 +0000 (0:00:00.152) 0:00:40.018 ********** 2025-05-17 00:44:54.538115 | orchestrator | ok: [testbed-node-4] 
2025-05-17 00:44:54.538667 | orchestrator | 2025-05-17 00:44:54.540165 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-17 00:44:54.540659 | orchestrator | Saturday 17 May 2025 00:44:54 +0000 (0:00:00.136) 0:00:40.155 ********** 2025-05-17 00:44:54.702566 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:44:54.703283 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:44:54.705895 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:54.706389 | orchestrator | 2025-05-17 00:44:54.707498 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-17 00:44:54.708798 | orchestrator | Saturday 17 May 2025 00:44:54 +0000 (0:00:00.164) 0:00:40.319 ********** 2025-05-17 00:44:54.868157 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:44:54.870062 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:44:54.870341 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:54.871694 | orchestrator | 2025-05-17 00:44:54.873320 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-17 00:44:54.874661 | orchestrator | Saturday 17 May 2025 00:44:54 +0000 (0:00:00.165) 0:00:40.484 ********** 2025-05-17 00:44:55.026662 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 
00:44:55.027349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:44:55.027823 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:55.028835 | orchestrator | 2025-05-17 00:44:55.029737 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-17 00:44:55.030115 | orchestrator | Saturday 17 May 2025 00:44:55 +0000 (0:00:00.158) 0:00:40.643 ********** 2025-05-17 00:44:55.160058 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:55.160424 | orchestrator | 2025-05-17 00:44:55.161821 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-17 00:44:55.163057 | orchestrator | Saturday 17 May 2025 00:44:55 +0000 (0:00:00.133) 0:00:40.776 ********** 2025-05-17 00:44:55.315497 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:55.316365 | orchestrator | 2025-05-17 00:44:55.318765 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-17 00:44:55.319174 | orchestrator | Saturday 17 May 2025 00:44:55 +0000 (0:00:00.155) 0:00:40.931 ********** 2025-05-17 00:44:55.447552 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:55.448760 | orchestrator | 2025-05-17 00:44:55.449772 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-17 00:44:55.451077 | orchestrator | Saturday 17 May 2025 00:44:55 +0000 (0:00:00.132) 0:00:41.064 ********** 2025-05-17 00:44:55.598291 | orchestrator | ok: [testbed-node-4] => { 2025-05-17 00:44:55.598763 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-17 00:44:55.600648 | orchestrator | } 2025-05-17 00:44:55.601582 | orchestrator | 2025-05-17 00:44:55.602726 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-17 
00:44:55.604232 | orchestrator | Saturday 17 May 2025 00:44:55 +0000 (0:00:00.151) 0:00:41.215 ********** 2025-05-17 00:44:55.959256 | orchestrator | ok: [testbed-node-4] => { 2025-05-17 00:44:55.960187 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-17 00:44:55.962120 | orchestrator | } 2025-05-17 00:44:55.963689 | orchestrator | 2025-05-17 00:44:55.964143 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-17 00:44:55.965679 | orchestrator | Saturday 17 May 2025 00:44:55 +0000 (0:00:00.361) 0:00:41.577 ********** 2025-05-17 00:44:56.106522 | orchestrator | ok: [testbed-node-4] => { 2025-05-17 00:44:56.107426 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-17 00:44:56.108689 | orchestrator | } 2025-05-17 00:44:56.109689 | orchestrator | 2025-05-17 00:44:56.110886 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-17 00:44:56.111595 | orchestrator | Saturday 17 May 2025 00:44:56 +0000 (0:00:00.143) 0:00:41.721 ********** 2025-05-17 00:44:56.601989 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:44:56.603127 | orchestrator | 2025-05-17 00:44:56.604328 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-17 00:44:56.606526 | orchestrator | Saturday 17 May 2025 00:44:56 +0000 (0:00:00.496) 0:00:42.217 ********** 2025-05-17 00:44:57.076534 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:44:57.076710 | orchestrator | 2025-05-17 00:44:57.078554 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-17 00:44:57.078967 | orchestrator | Saturday 17 May 2025 00:44:57 +0000 (0:00:00.476) 0:00:42.694 ********** 2025-05-17 00:44:57.573762 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:44:57.574412 | orchestrator | 2025-05-17 00:44:57.575330 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2025-05-17 00:44:57.579097 | orchestrator | Saturday 17 May 2025 00:44:57 +0000 (0:00:00.496) 0:00:43.190 ********** 2025-05-17 00:44:57.716552 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:44:57.717179 | orchestrator | 2025-05-17 00:44:57.718128 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-17 00:44:57.721098 | orchestrator | Saturday 17 May 2025 00:44:57 +0000 (0:00:00.143) 0:00:43.333 ********** 2025-05-17 00:44:57.827381 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:57.828965 | orchestrator | 2025-05-17 00:44:57.832100 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-17 00:44:57.833546 | orchestrator | Saturday 17 May 2025 00:44:57 +0000 (0:00:00.110) 0:00:43.444 ********** 2025-05-17 00:44:57.940862 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:57.942294 | orchestrator | 2025-05-17 00:44:57.943579 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-17 00:44:57.944497 | orchestrator | Saturday 17 May 2025 00:44:57 +0000 (0:00:00.114) 0:00:43.559 ********** 2025-05-17 00:44:58.095568 | orchestrator | ok: [testbed-node-4] => { 2025-05-17 00:44:58.095984 | orchestrator |  "vgs_report": { 2025-05-17 00:44:58.096452 | orchestrator |  "vg": [] 2025-05-17 00:44:58.097476 | orchestrator |  } 2025-05-17 00:44:58.098002 | orchestrator | } 2025-05-17 00:44:58.098650 | orchestrator | 2025-05-17 00:44:58.099115 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-17 00:44:58.099526 | orchestrator | Saturday 17 May 2025 00:44:58 +0000 (0:00:00.154) 0:00:43.713 ********** 2025-05-17 00:44:58.219075 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:58.219172 | orchestrator | 2025-05-17 00:44:58.219697 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2025-05-17 00:44:58.220166 | orchestrator | Saturday 17 May 2025 00:44:58 +0000 (0:00:00.121) 0:00:43.835 ********** 2025-05-17 00:44:58.363163 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:58.366195 | orchestrator | 2025-05-17 00:44:58.366239 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-17 00:44:58.366978 | orchestrator | Saturday 17 May 2025 00:44:58 +0000 (0:00:00.143) 0:00:43.978 ********** 2025-05-17 00:44:58.676370 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:58.679240 | orchestrator | 2025-05-17 00:44:58.679295 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-17 00:44:58.682954 | orchestrator | Saturday 17 May 2025 00:44:58 +0000 (0:00:00.313) 0:00:44.292 ********** 2025-05-17 00:44:58.821202 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:58.822468 | orchestrator | 2025-05-17 00:44:58.823930 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-17 00:44:58.824772 | orchestrator | Saturday 17 May 2025 00:44:58 +0000 (0:00:00.146) 0:00:44.439 ********** 2025-05-17 00:44:58.964814 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:58.966623 | orchestrator | 2025-05-17 00:44:58.967602 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-17 00:44:58.968320 | orchestrator | Saturday 17 May 2025 00:44:58 +0000 (0:00:00.144) 0:00:44.583 ********** 2025-05-17 00:44:59.103388 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:59.105433 | orchestrator | 2025-05-17 00:44:59.106127 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-17 00:44:59.107448 | orchestrator | Saturday 17 May 2025 00:44:59 +0000 (0:00:00.137) 0:00:44.720 ********** 2025-05-17 00:44:59.237853 | orchestrator | skipping: [testbed-node-4] 
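The "Calculate size needed ..." and "Fail if size ... > available" tasks above implement a capacity check: for each DB/WAL volume group, sum the sizes of the LVs the play wants to create and abort if that exceeds the VG's free bytes. On testbed-node-4 they were all skipped because the VG report was empty (`"vg": []`). The playbook source is not shown in this log, so the following is only a minimal sketch of that check under assumed names (`check_vg_capacity`, `vg_free_bytes`):

```python
def check_vg_capacity(vgs, num_lvs_wanted, lv_size_bytes):
    """Fail-style check: for each VG, the total size of the LVs we want
    to create must fit into the VG's available bytes.

    vgs: {vg_name: {"vg_size_bytes": int, "vg_free_bytes": int}}
    num_lvs_wanted: {vg_name: int} -- how many LVs to carve out per VG
    lv_size_bytes: size of each LV in bytes
    """
    for vg_name, vg in vgs.items():
        wanted = num_lvs_wanted.get(vg_name, 0) * lv_size_bytes
        if wanted > vg["vg_free_bytes"]:
            raise ValueError(
                f"{vg_name}: need {wanted} bytes but only "
                f"{vg['vg_free_bytes']} available"
            )

# Hypothetical data (this run had no DB/WAL VGs, so the checks skipped):
vgs = {"ceph-db-0": {"vg_size_bytes": 100 << 30, "vg_free_bytes": 90 << 30}}
check_vg_capacity(vgs, {"ceph-db-0": 2}, 30 << 30)  # 2 x 30 GiB fits in 90 GiB
```

This also mirrors the later "Fail if DB LV size < 30 GiB" tasks: 30 GiB per DB LV is the minimum the play appears to enforce.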
2025-05-17 00:44:59.239251 | orchestrator | 2025-05-17 00:44:59.240307 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-17 00:44:59.241802 | orchestrator | Saturday 17 May 2025 00:44:59 +0000 (0:00:00.134) 0:00:44.855 ********** 2025-05-17 00:44:59.378719 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:59.380753 | orchestrator | 2025-05-17 00:44:59.381419 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-17 00:44:59.382586 | orchestrator | Saturday 17 May 2025 00:44:59 +0000 (0:00:00.139) 0:00:44.995 ********** 2025-05-17 00:44:59.517503 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:59.519523 | orchestrator | 2025-05-17 00:44:59.520406 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-17 00:44:59.521447 | orchestrator | Saturday 17 May 2025 00:44:59 +0000 (0:00:00.139) 0:00:45.134 ********** 2025-05-17 00:44:59.675624 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:59.676676 | orchestrator | 2025-05-17 00:44:59.678137 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-17 00:44:59.678648 | orchestrator | Saturday 17 May 2025 00:44:59 +0000 (0:00:00.155) 0:00:45.290 ********** 2025-05-17 00:44:59.799814 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:59.801051 | orchestrator | 2025-05-17 00:44:59.802699 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-17 00:44:59.805075 | orchestrator | Saturday 17 May 2025 00:44:59 +0000 (0:00:00.127) 0:00:45.417 ********** 2025-05-17 00:44:59.939806 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:44:59.940649 | orchestrator | 2025-05-17 00:44:59.941453 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-17 00:44:59.942543 | orchestrator | 
Saturday 17 May 2025 00:44:59 +0000 (0:00:00.139) 0:00:45.556 ********** 2025-05-17 00:45:00.076642 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:00.077709 | orchestrator | 2025-05-17 00:45:00.079364 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-17 00:45:00.080161 | orchestrator | Saturday 17 May 2025 00:45:00 +0000 (0:00:00.135) 0:00:45.692 ********** 2025-05-17 00:45:00.205284 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:00.206567 | orchestrator | 2025-05-17 00:45:00.207669 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-17 00:45:00.208618 | orchestrator | Saturday 17 May 2025 00:45:00 +0000 (0:00:00.129) 0:00:45.821 ********** 2025-05-17 00:45:00.577988 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:00.579059 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:00.580133 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:00.582843 | orchestrator | 2025-05-17 00:45:00.585020 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-17 00:45:00.586157 | orchestrator | Saturday 17 May 2025 00:45:00 +0000 (0:00:00.372) 0:00:46.194 ********** 2025-05-17 00:45:00.744079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:00.744446 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:00.746321 | orchestrator | skipping: 
[testbed-node-4] 2025-05-17 00:45:00.747891 | orchestrator | 2025-05-17 00:45:00.749680 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-17 00:45:00.750403 | orchestrator | Saturday 17 May 2025 00:45:00 +0000 (0:00:00.165) 0:00:46.359 ********** 2025-05-17 00:45:00.909730 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:00.912122 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:00.913113 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:00.914150 | orchestrator | 2025-05-17 00:45:00.915186 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-17 00:45:00.915343 | orchestrator | Saturday 17 May 2025 00:45:00 +0000 (0:00:00.167) 0:00:46.527 ********** 2025-05-17 00:45:01.076585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:01.081095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:01.081168 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:01.081195 | orchestrator | 2025-05-17 00:45:01.081237 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-17 00:45:01.081395 | orchestrator | Saturday 17 May 2025 00:45:01 +0000 (0:00:00.167) 0:00:46.694 ********** 2025-05-17 00:45:01.260008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 
'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:01.260990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:01.262197 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:01.263432 | orchestrator | 2025-05-17 00:45:01.268100 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-17 00:45:01.271294 | orchestrator | Saturday 17 May 2025 00:45:01 +0000 (0:00:00.182) 0:00:46.877 ********** 2025-05-17 00:45:01.424083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:01.426188 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:01.427596 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:01.428598 | orchestrator | 2025-05-17 00:45:01.429637 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-17 00:45:01.430420 | orchestrator | Saturday 17 May 2025 00:45:01 +0000 (0:00:00.161) 0:00:47.039 ********** 2025-05-17 00:45:01.598261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:01.599133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:01.600787 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:01.605148 | orchestrator | 2025-05-17 00:45:01.605218 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-17 
00:45:01.605843 | orchestrator | Saturday 17 May 2025 00:45:01 +0000 (0:00:00.176) 0:00:47.215 ********** 2025-05-17 00:45:01.755897 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:01.756153 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:01.756410 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:01.756963 | orchestrator | 2025-05-17 00:45:01.757297 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-17 00:45:01.758099 | orchestrator | Saturday 17 May 2025 00:45:01 +0000 (0:00:00.159) 0:00:47.374 ********** 2025-05-17 00:45:02.289012 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:45:02.289133 | orchestrator | 2025-05-17 00:45:02.289772 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-17 00:45:02.290110 | orchestrator | Saturday 17 May 2025 00:45:02 +0000 (0:00:00.531) 0:00:47.906 ********** 2025-05-17 00:45:02.795616 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:45:02.795787 | orchestrator | 2025-05-17 00:45:02.796590 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-17 00:45:02.798160 | orchestrator | Saturday 17 May 2025 00:45:02 +0000 (0:00:00.506) 0:00:48.412 ********** 2025-05-17 00:45:03.140743 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:45:03.140951 | orchestrator | 2025-05-17 00:45:03.143595 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-17 00:45:03.143623 | orchestrator | Saturday 17 May 2025 00:45:03 +0000 (0:00:00.344) 0:00:48.757 ********** 2025-05-17 00:45:03.316235 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'vg_name': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'}) 2025-05-17 00:45:03.316929 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'vg_name': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'}) 2025-05-17 00:45:03.318227 | orchestrator | 2025-05-17 00:45:03.318834 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-17 00:45:03.320780 | orchestrator | Saturday 17 May 2025 00:45:03 +0000 (0:00:00.177) 0:00:48.934 ********** 2025-05-17 00:45:03.512316 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:03.512465 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:03.513648 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:03.514599 | orchestrator | 2025-05-17 00:45:03.515652 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-17 00:45:03.516387 | orchestrator | Saturday 17 May 2025 00:45:03 +0000 (0:00:00.192) 0:00:49.127 ********** 2025-05-17 00:45:03.692150 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:03.692531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:03.692946 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:03.693774 | orchestrator | 2025-05-17 00:45:03.694352 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-17 00:45:03.695270 | 
orchestrator | Saturday 17 May 2025 00:45:03 +0000 (0:00:00.183) 0:00:49.310 ********** 2025-05-17 00:45:03.862992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})  2025-05-17 00:45:03.863105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})  2025-05-17 00:45:03.863120 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:45:03.863225 | orchestrator | 2025-05-17 00:45:03.863244 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-17 00:45:03.863632 | orchestrator | Saturday 17 May 2025 00:45:03 +0000 (0:00:00.168) 0:00:49.479 ********** 2025-05-17 00:45:04.729712 | orchestrator | ok: [testbed-node-4] => { 2025-05-17 00:45:04.731187 | orchestrator |  "lvm_report": { 2025-05-17 00:45:04.731429 | orchestrator |  "lv": [ 2025-05-17 00:45:04.732411 | orchestrator |  { 2025-05-17 00:45:04.732681 | orchestrator |  "lv_name": "osd-block-93bb0954-6685-5c67-a7e0-a3574f092206", 2025-05-17 00:45:04.733696 | orchestrator |  "vg_name": "ceph-93bb0954-6685-5c67-a7e0-a3574f092206" 2025-05-17 00:45:04.735125 | orchestrator |  }, 2025-05-17 00:45:04.736339 | orchestrator |  { 2025-05-17 00:45:04.736777 | orchestrator |  "lv_name": "osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38", 2025-05-17 00:45:04.737749 | orchestrator |  "vg_name": "ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38" 2025-05-17 00:45:04.737846 | orchestrator |  } 2025-05-17 00:45:04.738801 | orchestrator |  ], 2025-05-17 00:45:04.739076 | orchestrator |  "pv": [ 2025-05-17 00:45:04.739879 | orchestrator |  { 2025-05-17 00:45:04.740434 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-17 00:45:04.741157 | orchestrator |  "vg_name": "ceph-93bb0954-6685-5c67-a7e0-a3574f092206" 2025-05-17 00:45:04.741538 | orchestrator |  }, 2025-05-17 
00:45:04.741821 | orchestrator |  { 2025-05-17 00:45:04.742626 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-17 00:45:04.743247 | orchestrator |  "vg_name": "ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38" 2025-05-17 00:45:04.743598 | orchestrator |  } 2025-05-17 00:45:04.744288 | orchestrator |  ] 2025-05-17 00:45:04.744312 | orchestrator |  } 2025-05-17 00:45:04.744797 | orchestrator | } 2025-05-17 00:45:04.745358 | orchestrator | 2025-05-17 00:45:04.745646 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-17 00:45:04.746115 | orchestrator | 2025-05-17 00:45:04.746297 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-17 00:45:04.746778 | orchestrator | Saturday 17 May 2025 00:45:04 +0000 (0:00:00.867) 0:00:50.346 ********** 2025-05-17 00:45:04.988566 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-17 00:45:04.988672 | orchestrator | 2025-05-17 00:45:04.991198 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-17 00:45:04.992185 | orchestrator | Saturday 17 May 2025 00:45:04 +0000 (0:00:00.257) 0:00:50.604 ********** 2025-05-17 00:45:05.210308 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:45:05.210465 | orchestrator | 2025-05-17 00:45:05.211258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:05.211883 | orchestrator | Saturday 17 May 2025 00:45:05 +0000 (0:00:00.223) 0:00:50.827 ********** 2025-05-17 00:45:05.681016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-17 00:45:05.682121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-17 00:45:05.683409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-17 00:45:05.684538 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-17 00:45:05.685535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-17 00:45:05.686502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-17 00:45:05.687141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-17 00:45:05.687721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-17 00:45:05.688358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-17 00:45:05.688754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-17 00:45:05.689369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-17 00:45:05.690450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-17 00:45:05.690851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-17 00:45:05.691272 | orchestrator | 2025-05-17 00:45:05.691661 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:05.692158 | orchestrator | Saturday 17 May 2025 00:45:05 +0000 (0:00:00.469) 0:00:51.297 ********** 2025-05-17 00:45:05.865300 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:05.865377 | orchestrator | 2025-05-17 00:45:05.865431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:05.865882 | orchestrator | Saturday 17 May 2025 00:45:05 +0000 (0:00:00.186) 0:00:51.483 ********** 2025-05-17 00:45:06.081263 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:06.081415 | orchestrator | 2025-05-17 
00:45:06.081433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:06.082606 | orchestrator | Saturday 17 May 2025 00:45:06 +0000 (0:00:00.213) 0:00:51.697 ********** 2025-05-17 00:45:06.287364 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:06.288181 | orchestrator | 2025-05-17 00:45:06.289043 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:06.291696 | orchestrator | Saturday 17 May 2025 00:45:06 +0000 (0:00:00.208) 0:00:51.905 ********** 2025-05-17 00:45:06.520446 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:06.521384 | orchestrator | 2025-05-17 00:45:06.521652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:06.522396 | orchestrator | Saturday 17 May 2025 00:45:06 +0000 (0:00:00.231) 0:00:52.137 ********** 2025-05-17 00:45:06.714634 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:06.717299 | orchestrator | 2025-05-17 00:45:06.718577 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:06.719671 | orchestrator | Saturday 17 May 2025 00:45:06 +0000 (0:00:00.195) 0:00:52.332 ********** 2025-05-17 00:45:07.271246 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:07.272094 | orchestrator | 2025-05-17 00:45:07.272835 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:07.275235 | orchestrator | Saturday 17 May 2025 00:45:07 +0000 (0:00:00.556) 0:00:52.888 ********** 2025-05-17 00:45:07.465205 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:07.465938 | orchestrator | 2025-05-17 00:45:07.467015 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:07.467978 | orchestrator | Saturday 17 May 2025 00:45:07 +0000 (0:00:00.192) 
0:00:53.081 ********** 2025-05-17 00:45:07.663617 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:07.664107 | orchestrator | 2025-05-17 00:45:07.665091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:07.665860 | orchestrator | Saturday 17 May 2025 00:45:07 +0000 (0:00:00.199) 0:00:53.281 ********** 2025-05-17 00:45:08.087229 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50) 2025-05-17 00:45:08.088513 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50) 2025-05-17 00:45:08.089404 | orchestrator | 2025-05-17 00:45:08.089980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:08.090990 | orchestrator | Saturday 17 May 2025 00:45:08 +0000 (0:00:00.423) 0:00:53.704 ********** 2025-05-17 00:45:08.517936 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4ddb2821-e209-41e3-b031-9f23c5adf4cf) 2025-05-17 00:45:08.518476 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4ddb2821-e209-41e3-b031-9f23c5adf4cf) 2025-05-17 00:45:08.521486 | orchestrator | 2025-05-17 00:45:08.521511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:08.521595 | orchestrator | Saturday 17 May 2025 00:45:08 +0000 (0:00:00.429) 0:00:54.134 ********** 2025-05-17 00:45:08.985529 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8746963d-35d6-4275-a53f-fa471798b09a) 2025-05-17 00:45:08.986623 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8746963d-35d6-4275-a53f-fa471798b09a) 2025-05-17 00:45:08.987030 | orchestrator | 2025-05-17 00:45:08.988776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:08.988834 | orchestrator | Saturday 17 
May 2025 00:45:08 +0000 (0:00:00.468) 0:00:54.603 ********** 2025-05-17 00:45:09.444997 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c9243530-1d89-4c38-b4ef-a9d7ed453cca) 2025-05-17 00:45:09.445099 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c9243530-1d89-4c38-b4ef-a9d7ed453cca) 2025-05-17 00:45:09.445336 | orchestrator | 2025-05-17 00:45:09.445751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-17 00:45:09.446222 | orchestrator | Saturday 17 May 2025 00:45:09 +0000 (0:00:00.459) 0:00:55.062 ********** 2025-05-17 00:45:09.773185 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-17 00:45:09.774967 | orchestrator | 2025-05-17 00:45:09.775746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:09.776459 | orchestrator | Saturday 17 May 2025 00:45:09 +0000 (0:00:00.325) 0:00:55.388 ********** 2025-05-17 00:45:10.234387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-17 00:45:10.234487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-17 00:45:10.236086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-17 00:45:10.237406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-17 00:45:10.238950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-17 00:45:10.239897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-17 00:45:10.240690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-17 00:45:10.241217 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-17 00:45:10.241555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-17 00:45:10.242069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-17 00:45:10.242860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-17 00:45:10.243173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-17 00:45:10.243488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-17 00:45:10.243821 | orchestrator | 2025-05-17 00:45:10.244363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:10.244636 | orchestrator | Saturday 17 May 2025 00:45:10 +0000 (0:00:00.462) 0:00:55.850 ********** 2025-05-17 00:45:10.805789 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:10.806091 | orchestrator | 2025-05-17 00:45:10.807105 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:10.808473 | orchestrator | Saturday 17 May 2025 00:45:10 +0000 (0:00:00.571) 0:00:56.421 ********** 2025-05-17 00:45:11.009430 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:11.009775 | orchestrator | 2025-05-17 00:45:11.010804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:11.013203 | orchestrator | Saturday 17 May 2025 00:45:11 +0000 (0:00:00.204) 0:00:56.626 ********** 2025-05-17 00:45:11.208807 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:11.208906 | orchestrator | 2025-05-17 00:45:11.209102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:11.210159 | 
orchestrator | Saturday 17 May 2025 00:45:11 +0000 (0:00:00.200) 0:00:56.827 ********** 2025-05-17 00:45:11.411114 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:11.411215 | orchestrator | 2025-05-17 00:45:11.411772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:11.412683 | orchestrator | Saturday 17 May 2025 00:45:11 +0000 (0:00:00.199) 0:00:57.027 ********** 2025-05-17 00:45:11.609988 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:11.610788 | orchestrator | 2025-05-17 00:45:11.614305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:11.614961 | orchestrator | Saturday 17 May 2025 00:45:11 +0000 (0:00:00.199) 0:00:57.226 ********** 2025-05-17 00:45:11.811547 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:11.812726 | orchestrator | 2025-05-17 00:45:11.812816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:11.813740 | orchestrator | Saturday 17 May 2025 00:45:11 +0000 (0:00:00.202) 0:00:57.429 ********** 2025-05-17 00:45:12.008361 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:12.008545 | orchestrator | 2025-05-17 00:45:12.009875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:12.010785 | orchestrator | Saturday 17 May 2025 00:45:12 +0000 (0:00:00.196) 0:00:57.625 ********** 2025-05-17 00:45:12.211605 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:12.212625 | orchestrator | 2025-05-17 00:45:12.214177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:12.214583 | orchestrator | Saturday 17 May 2025 00:45:12 +0000 (0:00:00.202) 0:00:57.827 ********** 2025-05-17 00:45:13.051187 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-17 00:45:13.052390 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-05-17 00:45:13.053415 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-17 00:45:13.054078 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-17 00:45:13.055130 | orchestrator | 2025-05-17 00:45:13.055728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:13.056532 | orchestrator | Saturday 17 May 2025 00:45:13 +0000 (0:00:00.839) 0:00:58.667 ********** 2025-05-17 00:45:13.247968 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:13.248569 | orchestrator | 2025-05-17 00:45:13.249385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:13.250200 | orchestrator | Saturday 17 May 2025 00:45:13 +0000 (0:00:00.194) 0:00:58.862 ********** 2025-05-17 00:45:13.859381 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:13.861279 | orchestrator | 2025-05-17 00:45:13.861316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:13.861331 | orchestrator | Saturday 17 May 2025 00:45:13 +0000 (0:00:00.611) 0:00:59.474 ********** 2025-05-17 00:45:14.076487 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:14.076655 | orchestrator | 2025-05-17 00:45:14.077500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-17 00:45:14.078110 | orchestrator | Saturday 17 May 2025 00:45:14 +0000 (0:00:00.219) 0:00:59.693 ********** 2025-05-17 00:45:14.295758 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:45:14.295962 | orchestrator | 2025-05-17 00:45:14.295986 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-17 00:45:14.296068 | orchestrator | Saturday 17 May 2025 00:45:14 +0000 (0:00:00.217) 0:00:59.911 ********** 2025-05-17 00:45:14.465987 | orchestrator | skipping: [testbed-node-5] 2025-05-17 
00:45:14.468623 | orchestrator |
2025-05-17 00:45:14.469328 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-17 00:45:14.470074 | orchestrator | Saturday 17 May 2025 00:45:14 +0000 (0:00:00.173) 0:01:00.084 **********
2025-05-17 00:45:14.670383 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a83a275b-240b-53eb-892d-9c3e23ab252d'}})
2025-05-17 00:45:14.672348 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b4d5f2e3-0e32-57e8-8b55-58d04db15593'}})
2025-05-17 00:45:14.672931 | orchestrator |
2025-05-17 00:45:14.673397 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-17 00:45:14.673805 | orchestrator | Saturday 17 May 2025 00:45:14 +0000 (0:00:00.200) 0:01:00.285 **********
2025-05-17 00:45:16.457478 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:16.457724 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:16.459192 | orchestrator |
2025-05-17 00:45:16.460130 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-17 00:45:16.460727 | orchestrator | Saturday 17 May 2025 00:45:16 +0000 (0:00:01.788) 0:01:02.073 **********
2025-05-17 00:45:16.621016 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:16.621444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:16.622594 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:16.623401 | orchestrator |
2025-05-17 00:45:16.624517 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-17 00:45:16.625421 | orchestrator | Saturday 17 May 2025 00:45:16 +0000 (0:00:00.164) 0:01:02.237 **********
2025-05-17 00:45:17.993114 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:17.993308 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:17.994473 | orchestrator |
2025-05-17 00:45:17.995314 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-17 00:45:17.996173 | orchestrator | Saturday 17 May 2025 00:45:17 +0000 (0:00:01.371) 0:01:03.609 **********
2025-05-17 00:45:18.151483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:18.151817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:18.153083 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:18.154568 | orchestrator |
2025-05-17 00:45:18.155256 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-17 00:45:18.156405 | orchestrator | Saturday 17 May 2025 00:45:18 +0000 (0:00:00.159) 0:01:03.769 **********
2025-05-17 00:45:18.470290 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:18.470428 | orchestrator |
2025-05-17 00:45:18.470458 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-17 00:45:18.470574 | orchestrator | Saturday 17 May 2025 00:45:18 +0000 (0:00:00.317) 0:01:04.086 **********
2025-05-17 00:45:18.635067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:18.635169 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:18.635996 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:18.636594 | orchestrator |
2025-05-17 00:45:18.637356 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-17 00:45:18.639988 | orchestrator | Saturday 17 May 2025 00:45:18 +0000 (0:00:00.165) 0:01:04.251 **********
2025-05-17 00:45:18.777829 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:18.777989 | orchestrator |
2025-05-17 00:45:18.778012 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-17 00:45:18.778588 | orchestrator | Saturday 17 May 2025 00:45:18 +0000 (0:00:00.143) 0:01:04.395 **********
2025-05-17 00:45:18.946772 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:18.948710 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:18.948744 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:18.948758 | orchestrator |
2025-05-17 00:45:18.951749 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-17 00:45:18.953656 | orchestrator | Saturday 17 May 2025 00:45:18 +0000 (0:00:00.168) 0:01:04.564 **********
2025-05-17 00:45:19.085979 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:19.086649 | orchestrator |
2025-05-17 00:45:19.086997 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-17 00:45:19.087956 | orchestrator | Saturday 17 May 2025 00:45:19 +0000 (0:00:00.139) 0:01:04.703 **********
2025-05-17 00:45:19.235466 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:19.236542 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:19.237439 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:19.238485 | orchestrator |
2025-05-17 00:45:19.239734 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-17 00:45:19.240075 | orchestrator | Saturday 17 May 2025 00:45:19 +0000 (0:00:00.149) 0:01:04.853 **********
2025-05-17 00:45:19.383331 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:19.384293 | orchestrator |
2025-05-17 00:45:19.384582 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-17 00:45:19.385516 | orchestrator | Saturday 17 May 2025 00:45:19 +0000 (0:00:00.147) 0:01:05.001 **********
2025-05-17 00:45:19.559657 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:19.559890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:19.561149 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:19.562940 | orchestrator |
2025-05-17 00:45:19.565210 | orchestrator | TASK [Count OSDs put on
ceph_wal_devices defined in lvm_volumes] ***************
2025-05-17 00:45:19.565494 | orchestrator | Saturday 17 May 2025 00:45:19 +0000 (0:00:00.174) 0:01:05.176 **********
2025-05-17 00:45:19.721573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:19.721663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:19.721772 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:19.722645 | orchestrator |
2025-05-17 00:45:19.725182 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-17 00:45:19.725511 | orchestrator | Saturday 17 May 2025 00:45:19 +0000 (0:00:00.162) 0:01:05.338 **********
2025-05-17 00:45:19.873315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:19.873743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:19.875398 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:19.876572 | orchestrator |
2025-05-17 00:45:19.878696 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-17 00:45:19.878963 | orchestrator | Saturday 17 May 2025 00:45:19 +0000 (0:00:00.152) 0:01:05.490 **********
2025-05-17 00:45:20.014638 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:20.015719 | orchestrator |
2025-05-17 00:45:20.016837 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-17 00:45:20.017823 | orchestrator | Saturday 17 May 2025 00:45:20 +0000 (0:00:00.142) 0:01:05.632 **********
2025-05-17 00:45:20.155541 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:20.155701 | orchestrator |
2025-05-17 00:45:20.156874 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-17 00:45:20.157932 | orchestrator | Saturday 17 May 2025 00:45:20 +0000 (0:00:00.140) 0:01:05.773 **********
2025-05-17 00:45:20.490698 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:20.492130 | orchestrator |
2025-05-17 00:45:20.493321 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-17 00:45:20.494507 | orchestrator | Saturday 17 May 2025 00:45:20 +0000 (0:00:00.335) 0:01:06.108 **********
2025-05-17 00:45:20.636728 | orchestrator | ok: [testbed-node-5] => {
2025-05-17 00:45:20.638165 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-05-17 00:45:20.639482 | orchestrator | }
2025-05-17 00:45:20.641047 | orchestrator |
2025-05-17 00:45:20.642732 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-17 00:45:20.643573 | orchestrator | Saturday 17 May 2025 00:45:20 +0000 (0:00:00.146) 0:01:06.254 **********
2025-05-17 00:45:20.784229 | orchestrator | ok: [testbed-node-5] => {
2025-05-17 00:45:20.784404 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-05-17 00:45:20.785231 | orchestrator | }
2025-05-17 00:45:20.786215 | orchestrator |
2025-05-17 00:45:20.787364 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-17 00:45:20.788161 | orchestrator | Saturday 17 May 2025 00:45:20 +0000 (0:00:00.144) 0:01:06.399 **********
2025-05-17 00:45:20.917443 | orchestrator | ok: [testbed-node-5] => {
2025-05-17 00:45:20.918484 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-05-17 00:45:20.919745 | orchestrator | }
2025-05-17 00:45:20.920884 | orchestrator |
2025-05-17 00:45:20.921247 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-17 00:45:20.922072 | orchestrator | Saturday 17 May 2025 00:45:20 +0000 (0:00:00.133) 0:01:06.533 **********
2025-05-17 00:45:21.445755 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:21.446164 | orchestrator |
2025-05-17 00:45:21.447104 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-17 00:45:21.448897 | orchestrator | Saturday 17 May 2025 00:45:21 +0000 (0:00:00.529) 0:01:07.063 **********
2025-05-17 00:45:21.964099 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:21.964248 | orchestrator |
2025-05-17 00:45:21.964267 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-17 00:45:21.964345 | orchestrator | Saturday 17 May 2025 00:45:21 +0000 (0:00:00.517) 0:01:07.581 **********
2025-05-17 00:45:22.473984 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:22.474233 | orchestrator |
2025-05-17 00:45:22.475155 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-17 00:45:22.476102 | orchestrator | Saturday 17 May 2025 00:45:22 +0000 (0:00:00.510) 0:01:08.091 **********
2025-05-17 00:45:22.615698 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:22.615796 | orchestrator |
2025-05-17 00:45:22.616375 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-17 00:45:22.617178 | orchestrator | Saturday 17 May 2025 00:45:22 +0000 (0:00:00.141) 0:01:08.233 **********
2025-05-17 00:45:22.725614 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:22.726619 | orchestrator |
2025-05-17 00:45:22.727351 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-17 00:45:22.733437 | orchestrator | Saturday 17 May 2025 00:45:22 +0000 (0:00:00.109) 0:01:08.342 **********
2025-05-17 00:45:22.841530 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:22.841654 | orchestrator |
2025-05-17 00:45:22.841730 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-17 00:45:22.842401 | orchestrator | Saturday 17 May 2025 00:45:22 +0000 (0:00:00.116) 0:01:08.459 **********
2025-05-17 00:45:22.984092 | orchestrator | ok: [testbed-node-5] => {
2025-05-17 00:45:22.984181 | orchestrator |     "vgs_report": {
2025-05-17 00:45:22.984578 | orchestrator |         "vg": []
2025-05-17 00:45:22.985263 | orchestrator |     }
2025-05-17 00:45:22.985684 | orchestrator | }
2025-05-17 00:45:22.986539 | orchestrator |
2025-05-17 00:45:22.986888 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-17 00:45:22.989175 | orchestrator | Saturday 17 May 2025 00:45:22 +0000 (0:00:00.142) 0:01:08.602 **********
2025-05-17 00:45:23.300477 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:23.301557 | orchestrator |
2025-05-17 00:45:23.302278 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-17 00:45:23.303170 | orchestrator | Saturday 17 May 2025 00:45:23 +0000 (0:00:00.316) 0:01:08.918 **********
2025-05-17 00:45:23.443709 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:23.443892 | orchestrator |
2025-05-17 00:45:23.444843 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-17 00:45:23.445250 | orchestrator | Saturday 17 May 2025 00:45:23 +0000 (0:00:00.143) 0:01:09.061 **********
2025-05-17 00:45:23.577525 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:23.578193 | orchestrator |
2025-05-17 00:45:23.579632 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-17 00:45:23.580082 | orchestrator | Saturday 17 May 2025 00:45:23 +0000 (0:00:00.133) 0:01:09.195 **********
2025-05-17 00:45:23.732115 |
orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:23.732269 | orchestrator |
2025-05-17 00:45:23.736544 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-17 00:45:23.737175 | orchestrator | Saturday 17 May 2025 00:45:23 +0000 (0:00:00.153) 0:01:09.348 **********
2025-05-17 00:45:23.869975 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:23.870798 | orchestrator |
2025-05-17 00:45:23.871907 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-17 00:45:23.874840 | orchestrator | Saturday 17 May 2025 00:45:23 +0000 (0:00:00.139) 0:01:09.488 **********
2025-05-17 00:45:24.006325 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:24.006710 | orchestrator |
2025-05-17 00:45:24.008016 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-17 00:45:24.008658 | orchestrator | Saturday 17 May 2025 00:45:24 +0000 (0:00:00.136) 0:01:09.624 **********
2025-05-17 00:45:24.150994 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:24.153392 | orchestrator |
2025-05-17 00:45:24.153729 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-17 00:45:24.154589 | orchestrator | Saturday 17 May 2025 00:45:24 +0000 (0:00:00.144) 0:01:09.768 **********
2025-05-17 00:45:24.294394 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:24.294500 | orchestrator |
2025-05-17 00:45:24.295116 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-17 00:45:24.295338 | orchestrator | Saturday 17 May 2025 00:45:24 +0000 (0:00:00.143) 0:01:09.912 **********
2025-05-17 00:45:24.450746 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:24.451257 | orchestrator |
2025-05-17 00:45:24.452621 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-17 00:45:24.453496 | orchestrator | Saturday 17 May 2025 00:45:24 +0000 (0:00:00.154) 0:01:10.066 **********
2025-05-17 00:45:24.615088 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:24.615308 | orchestrator |
2025-05-17 00:45:24.615665 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-17 00:45:24.616089 | orchestrator | Saturday 17 May 2025 00:45:24 +0000 (0:00:00.166) 0:01:10.232 **********
2025-05-17 00:45:24.748735 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:24.750662 | orchestrator |
2025-05-17 00:45:24.751247 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-17 00:45:24.752444 | orchestrator | Saturday 17 May 2025 00:45:24 +0000 (0:00:00.132) 0:01:10.365 **********
2025-05-17 00:45:25.079658 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:25.080185 | orchestrator |
2025-05-17 00:45:25.081454 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-17 00:45:25.082577 | orchestrator | Saturday 17 May 2025 00:45:25 +0000 (0:00:00.331) 0:01:10.696 **********
2025-05-17 00:45:25.215883 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:25.217977 | orchestrator |
2025-05-17 00:45:25.218570 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-17 00:45:25.219629 | orchestrator | Saturday 17 May 2025 00:45:25 +0000 (0:00:00.136) 0:01:10.833 **********
2025-05-17 00:45:25.366275 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:25.367591 | orchestrator |
2025-05-17 00:45:25.368599 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-17 00:45:25.369516 | orchestrator | Saturday 17 May 2025 00:45:25 +0000 (0:00:00.150) 0:01:10.983 **********
2025-05-17 00:45:25.554804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:25.554904 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:25.555294 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:25.556095 | orchestrator |
2025-05-17 00:45:25.556882 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-17 00:45:25.557514 | orchestrator | Saturday 17 May 2025 00:45:25 +0000 (0:00:00.184) 0:01:11.168 **********
2025-05-17 00:45:25.719093 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:25.719204 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:25.719353 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:25.719605 | orchestrator |
2025-05-17 00:45:25.721143 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-17 00:45:25.721594 | orchestrator | Saturday 17 May 2025 00:45:25 +0000 (0:00:00.167) 0:01:11.335 **********
2025-05-17 00:45:25.890827 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:25.890990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:25.891759 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:25.893320 | orchestrator |
2025-05-17 00:45:25.894400 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-17 00:45:25.895764 | orchestrator | Saturday 17 May 2025 00:45:25 +0000 (0:00:00.171) 0:01:11.507 **********
2025-05-17 00:45:26.067855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:26.068897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:26.069770 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:26.070138 | orchestrator |
2025-05-17 00:45:26.071338 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-17 00:45:26.072360 | orchestrator | Saturday 17 May 2025 00:45:26 +0000 (0:00:00.178) 0:01:11.685 **********
2025-05-17 00:45:26.227899 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:26.228188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:26.229647 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:26.230653 | orchestrator |
2025-05-17 00:45:26.231152 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-17 00:45:26.231984 | orchestrator | Saturday 17 May 2025 00:45:26 +0000 (0:00:00.159) 0:01:11.845 **********
2025-05-17 00:45:26.387778 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:26.388392 | orchestrator | skipping: [testbed-node-5] => (item={'data':
'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:26.388991 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:26.390428 | orchestrator |
2025-05-17 00:45:26.391471 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-17 00:45:26.392390 | orchestrator | Saturday 17 May 2025 00:45:26 +0000 (0:00:00.160) 0:01:12.005 **********
2025-05-17 00:45:26.561686 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:26.563671 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:26.565254 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:26.568603 | orchestrator |
2025-05-17 00:45:26.568724 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-17 00:45:26.568742 | orchestrator | Saturday 17 May 2025 00:45:26 +0000 (0:00:00.173) 0:01:12.178 **********
2025-05-17 00:45:26.721828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:26.722147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:26.723397 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:26.724050 | orchestrator |
2025-05-17 00:45:26.724468 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-17 00:45:26.725437 | orchestrator | Saturday 17 May 2025 00:45:26 +0000 (0:00:00.159) 0:01:12.338 **********
2025-05-17 00:45:27.440678 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:27.441492 | orchestrator |
2025-05-17 00:45:27.442187 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-17 00:45:27.443265 | orchestrator | Saturday 17 May 2025 00:45:27 +0000 (0:00:00.718) 0:01:13.057 **********
2025-05-17 00:45:27.993662 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:27.993839 | orchestrator |
2025-05-17 00:45:27.995363 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-17 00:45:27.995666 | orchestrator | Saturday 17 May 2025 00:45:27 +0000 (0:00:00.552) 0:01:13.609 **********
2025-05-17 00:45:28.149680 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:28.149785 | orchestrator |
2025-05-17 00:45:28.152511 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-17 00:45:28.153068 | orchestrator | Saturday 17 May 2025 00:45:28 +0000 (0:00:00.153) 0:01:13.763 **********
2025-05-17 00:45:28.322812 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'vg_name': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:28.323042 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'vg_name': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:28.323768 | orchestrator |
2025-05-17 00:45:28.323795 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-17 00:45:28.324122 | orchestrator | Saturday 17 May 2025 00:45:28 +0000 (0:00:00.177) 0:01:13.941 **********
2025-05-17 00:45:28.511977 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:28.512881 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:28.514253 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:28.516971 | orchestrator |
2025-05-17 00:45:28.517440 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-17 00:45:28.520148 | orchestrator | Saturday 17 May 2025 00:45:28 +0000 (0:00:00.188) 0:01:14.129 **********
2025-05-17 00:45:28.672207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:28.672517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:28.673572 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:28.674201 | orchestrator |
2025-05-17 00:45:28.675048 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-17 00:45:28.675859 | orchestrator | Saturday 17 May 2025 00:45:28 +0000 (0:00:00.159) 0:01:14.289 **********
2025-05-17 00:45:28.838503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
2025-05-17 00:45:28.842105 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
2025-05-17 00:45:28.843138 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:28.843846 | orchestrator |
2025-05-17 00:45:28.844803 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-17 00:45:28.845253 | orchestrator | Saturday 17 May 2025 00:45:28 +0000 (0:00:00.166) 0:01:14.455 **********
2025-05-17 00:45:29.246851 | orchestrator | ok: [testbed-node-5] => {
2025-05-17 00:45:29.247094 | orchestrator |     "lvm_report": {
2025-05-17 00:45:29.247202 | orchestrator |         "lv": [
2025-05-17 00:45:29.250234 | orchestrator |             {
2025-05-17 00:45:29.250277 | orchestrator |                 "lv_name": "osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d",
2025-05-17 00:45:29.252520 | orchestrator |                 "vg_name": "ceph-a83a275b-240b-53eb-892d-9c3e23ab252d"
2025-05-17 00:45:29.253217 | orchestrator |             },
2025-05-17 00:45:29.254010 | orchestrator |             {
2025-05-17 00:45:29.254792 | orchestrator |                 "lv_name": "osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593",
2025-05-17 00:45:29.256091 | orchestrator |                 "vg_name": "ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593"
2025-05-17 00:45:29.256574 | orchestrator |             }
2025-05-17 00:45:29.257626 | orchestrator |         ],
2025-05-17 00:45:29.258535 | orchestrator |         "pv": [
2025-05-17 00:45:29.259250 | orchestrator |             {
2025-05-17 00:45:29.259978 | orchestrator |                 "pv_name": "/dev/sdb",
2025-05-17 00:45:29.260789 | orchestrator |                 "vg_name": "ceph-a83a275b-240b-53eb-892d-9c3e23ab252d"
2025-05-17 00:45:29.261658 | orchestrator |             },
2025-05-17 00:45:29.262171 | orchestrator |             {
2025-05-17 00:45:29.263032 | orchestrator |                 "pv_name": "/dev/sdc",
2025-05-17 00:45:29.263217 | orchestrator |                 "vg_name": "ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593"
2025-05-17 00:45:29.264245 | orchestrator |             }
2025-05-17 00:45:29.264719 | orchestrator |         ]
2025-05-17 00:45:29.265858 | orchestrator |     }
2025-05-17 00:45:29.266759 | orchestrator | }
2025-05-17 00:45:29.267104 | orchestrator |
2025-05-17 00:45:29.268401 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:45:29.268456 | orchestrator | 2025-05-17 00:45:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-17 00:45:29.268470 | orchestrator | 2025-05-17 00:45:29 | INFO  | Please wait and do not abort execution.
2025-05-17 00:45:29.268657 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-17 00:45:29.269208 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-17 00:45:29.269615 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-17 00:45:29.270136 | orchestrator |
2025-05-17 00:45:29.270499 | orchestrator |
2025-05-17 00:45:29.270896 | orchestrator |
2025-05-17 00:45:29.271261 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 00:45:29.271664 | orchestrator | Saturday 17 May 2025 00:45:29 +0000 (0:00:00.407) 0:01:14.862 **********
2025-05-17 00:45:29.272492 | orchestrator | ===============================================================================
2025-05-17 00:45:29.272727 | orchestrator | Create block VGs -------------------------------------------------------- 5.90s
2025-05-17 00:45:29.273214 | orchestrator | Create block LVs -------------------------------------------------------- 4.17s
2025-05-17 00:45:29.273555 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.07s
2025-05-17 00:45:29.273992 | orchestrator | Print LVM report data --------------------------------------------------- 1.94s
2025-05-17 00:45:29.274366 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.79s
2025-05-17 00:45:29.274793 | orchestrator | Add known links to the list of available block devices ------------------ 1.59s
2025-05-17 00:45:29.275117 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s
2025-05-17 00:45:29.275564 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s
2025-05-17 00:45:29.276015 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s
2025-05-17 00:45:29.276347 | orchestrator | Add known partitions to the list of available block devices ------------- 1.38s
2025-05-17 00:45:29.276699 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.10s
2025-05-17 00:45:29.277047 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-05-17 00:45:29.277437 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s
2025-05-17 00:45:29.277731 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s
2025-05-17 00:45:29.278129 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.72s
2025-05-17 00:45:29.278515 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.70s
2025-05-17 00:45:29.278746 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s
2025-05-17 00:45:29.279088 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.66s
2025-05-17 00:45:29.279414 | orchestrator | Print number of OSDs wanted per WAL VG ---------------------------------- 0.65s
2025-05-17 00:45:29.279701 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-05-17 00:45:31.191850 | orchestrator | 2025-05-17 00:45:31 | INFO  | Task 13fc0c3f-adb8-43c5-a1bd-517731bf6397 (facts) was prepared for execution.
2025-05-17 00:45:31.192038 | orchestrator | 2025-05-17 00:45:31 | INFO  | It takes a moment until task 13fc0c3f-adb8-43c5-a1bd-517731bf6397 (facts) has been started and output is visible here.
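The Ceph task output above shows each `ceph_osd_devices` entry (e.g. `sdb` with its `osd_lvm_uuid`) being turned into an `lvm_volumes`-style item with a `ceph-<uuid>` volume group and an `osd-block-<uuid>` logical volume. A minimal Python sketch of that naming scheme, inferred purely from the log output (the helper `lvm_volumes` is hypothetical, not part of the playbooks):

```python
# Devices and UUIDs as reported by the "Create dict of block VGs -> PVs" task.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "a83a275b-240b-53eb-892d-9c3e23ab252d"},
    "sdc": {"osd_lvm_uuid": "b4d5f2e3-0e32-57e8-8b55-58d04db15593"},
}

def lvm_volumes(devices):
    """Derive lvm_volumes-style items (data LV + data VG) per device,
    matching the names printed by the 'Create block VGs/LVs' tasks."""
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in devices.values()
    ]

for item in lvm_volumes(ceph_osd_devices):
    print(item["data_vg"], "->", item["data"])
```

The printed pairs correspond to the `changed:` items in the "Create block VGs" and "Create block LVs" tasks above.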
2025-05-17 00:45:34.272660 | orchestrator |
2025-05-17 00:45:34.276864 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-17 00:45:34.276962 | orchestrator |
2025-05-17 00:45:34.276979 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-17 00:45:34.276991 | orchestrator | Saturday 17 May 2025 00:45:34 +0000 (0:00:00.214) 0:00:00.214 **********
2025-05-17 00:45:35.334444 | orchestrator | ok: [testbed-manager]
2025-05-17 00:45:35.334578 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:45:35.336071 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:45:35.339443 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:45:35.339474 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:45:35.339486 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:45:35.339497 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:35.343512 | orchestrator |
2025-05-17 00:45:35.344901 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-17 00:45:35.345056 | orchestrator | Saturday 17 May 2025 00:45:35 +0000 (0:00:01.061) 0:00:01.275 **********
2025-05-17 00:45:35.495616 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:45:35.576902 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:45:35.658557 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:45:35.736099 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:45:35.809159 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:45:36.536707 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:45:36.536822 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:36.537218 | orchestrator |
2025-05-17 00:45:36.537246 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-17 00:45:36.537608 | orchestrator |
2025-05-17 00:45:36.538084 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-17 00:45:36.540379 | orchestrator | Saturday 17 May 2025 00:45:36 +0000 (0:00:01.200) 0:00:02.476 **********
2025-05-17 00:45:41.203398 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:45:41.203571 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:45:41.204444 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:45:41.205062 | orchestrator | ok: [testbed-manager]
2025-05-17 00:45:41.209122 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:45:41.209156 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:45:41.209168 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:45:41.209180 | orchestrator |
2025-05-17 00:45:41.209193 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-17 00:45:41.209205 | orchestrator |
2025-05-17 00:45:41.209217 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-17 00:45:41.210062 | orchestrator | Saturday 17 May 2025 00:45:41 +0000 (0:00:04.671) 0:00:07.148 **********
2025-05-17 00:45:41.525006 | orchestrator | skipping: [testbed-manager]
2025-05-17 00:45:41.600161 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:45:41.669046 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:45:41.743751 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:45:41.817535 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:45:41.860557 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:45:41.860640 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:45:41.861103 | orchestrator |
2025-05-17 00:45:41.862163 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:45:41.862614 | orchestrator | 2025-05-17 00:45:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
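The PLAY RECAP blocks in this log summarize each host as counter fields (ok, changed, unreachable, failed, skipped, rescued, ignored). When checking many hosts, a small parser saves eyeballing; this is a reading-aid sketch assuming the recap line layout seen here, not an Ansible API (the function name is hypothetical):

```python
def parse_recap_line(line):
    """Split a PLAY RECAP host line such as
    'testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 ...'
    into (host, {counter: value}). Spacing varies, so we split on
    whitespace rather than fixed columns."""
    host, _, rest = line.partition(" : ")
    stats = {}
    for field in rest.split():
        key, _, value = field.partition("=")
        if value.isdigit():
            stats[key] = int(value)
    return host.strip(), stats

if __name__ == "__main__":
    host, stats = parse_recap_line(
        "testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0"
    )
    print(host, stats["ok"], stats["failed"])
```

A run is healthy when `failed` and `unreachable` are zero for every host, which is the condition the recaps above satisfy.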
2025-05-17 00:45:41.862835 | orchestrator | 2025-05-17 00:45:41 | INFO  | Please wait and do not abort execution.
2025-05-17 00:45:41.864379 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:45:41.864987 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:45:41.865796 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:45:41.866844 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:45:41.867132 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:45:41.867546 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:45:41.867888 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-17 00:45:41.868311 | orchestrator |
2025-05-17 00:45:41.868853 | orchestrator | Saturday 17 May 2025 00:45:41 +0000 (0:00:00.656) 0:00:07.804 **********
2025-05-17 00:45:41.869349 | orchestrator | ===============================================================================
2025-05-17 00:45:41.870101 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.67s
2025-05-17 00:45:41.870208 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s
2025-05-17 00:45:41.870590 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s
2025-05-17 00:45:41.871018 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.66s
2025-05-17 00:45:42.357072 | orchestrator |
2025-05-17 00:45:42.360270 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat May 17 00:45:42 UTC 2025
2025-05-17 00:45:42.360322 |
orchestrator |
2025-05-17 00:45:43.732429 | orchestrator | 2025-05-17 00:45:43 | INFO  | Collection nutshell is prepared for execution
2025-05-17 00:45:43.732531 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [0] - dotfiles
2025-05-17 00:45:43.737212 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [0] - homer
2025-05-17 00:45:43.737256 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [0] - netdata
2025-05-17 00:45:43.737268 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [0] - openstackclient
2025-05-17 00:45:43.737279 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [0] - phpmyadmin
2025-05-17 00:45:43.737290 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [0] - common
2025-05-17 00:45:43.738238 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [1] -- loadbalancer
2025-05-17 00:45:43.738265 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [2] --- opensearch
2025-05-17 00:45:43.738450 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [2] --- mariadb-ng
2025-05-17 00:45:43.738471 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [3] ---- horizon
2025-05-17 00:45:43.738483 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [3] ---- keystone
2025-05-17 00:45:43.738494 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [4] ----- neutron
2025-05-17 00:45:43.738536 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [5] ------ wait-for-nova
2025-05-17 00:45:43.738600 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [5] ------ octavia
2025-05-17 00:45:43.739076 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [4] ----- barbican
2025-05-17 00:45:43.739099 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [4] ----- designate
2025-05-17 00:45:43.739726 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [4] ----- ironic
2025-05-17 00:45:43.739846 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [4] ----- placement
2025-05-17 00:45:43.739862 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [4] ----- magnum
2025-05-17 00:45:43.739873 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [1] -- openvswitch
2025-05-17 00:45:43.739884 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [2] --- ovn
2025-05-17 00:45:43.740034 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [1] -- memcached
2025-05-17 00:45:43.740052 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [1] -- redis
2025-05-17 00:45:43.740064 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [1] -- rabbitmq-ng
2025-05-17 00:45:43.740075 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [0] - kubernetes
2025-05-17 00:45:43.740086 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [1] -- kubeconfig
2025-05-17 00:45:43.740097 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [1] -- copy-kubeconfig
2025-05-17 00:45:43.740604 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [0] - ceph
2025-05-17 00:45:43.741090 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [1] -- ceph-pools
2025-05-17 00:45:43.741116 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [2] --- copy-ceph-keys
2025-05-17 00:45:43.741570 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [3] ---- cephclient
2025-05-17 00:45:43.741592 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-05-17 00:45:43.741605 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [4] ----- wait-for-keystone
2025-05-17 00:45:43.741617 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [5] ------ kolla-ceph-rgw
2025-05-17 00:45:43.741735 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [5] ------ glance
2025-05-17 00:45:43.741750 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [5] ------ cinder
2025-05-17 00:45:43.741761 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [5] ------ nova
2025-05-17 00:45:43.741827 | orchestrator | 2025-05-17 00:45:43 | INFO  | A [4] ----- prometheus
2025-05-17 00:45:43.741841 | orchestrator | 2025-05-17 00:45:43 | INFO  | D [5] ------ grafana
2025-05-17 00:45:43.865517 | orchestrator | 2025-05-17 00:45:43 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-05-17 00:45:43.865589 |
orchestrator | 2025-05-17 00:45:43 | INFO  | Tasks are running in the background
2025-05-17 00:45:45.708380 | orchestrator | 2025-05-17 00:45:45 | INFO  | No task IDs specified, wait for all currently running tasks
2025-05-17 00:45:47.824767 | orchestrator | 2025-05-17 00:45:47 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:45:47.825047 | orchestrator | 2025-05-17 00:45:47 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:45:47.825347 | orchestrator | 2025-05-17 00:45:47 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:45:47.826085 | orchestrator | 2025-05-17 00:45:47 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:45:47.826811 | orchestrator | 2025-05-17 00:45:47 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:45:47.827367 | orchestrator | 2025-05-17 00:45:47 | INFO  | Task 5363eb78-722a-415d-86cc-90623644355b is in state STARTED
2025-05-17 00:45:47.827432 | orchestrator | 2025-05-17 00:45:47 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:45:50.869986 | orchestrator | 2025-05-17 00:45:50 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:45:50.871086 | orchestrator | 2025-05-17 00:45:50 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:45:50.872691 | orchestrator | 2025-05-17 00:45:50 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:45:50.872740 | orchestrator | 2025-05-17 00:45:50 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:45:50.872830 | orchestrator | 2025-05-17 00:45:50 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:45:50.873373 | orchestrator | 2025-05-17 00:45:50 | INFO  | Task 5363eb78-722a-415d-86cc-90623644355b is in state STARTED
2025-05-17 00:45:50.873635 | orchestrator | 2025-05-17 00:45:50 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:45:53.924877 | orchestrator | 2025-05-17 00:45:53 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:45:53.929838 | orchestrator | 2025-05-17 00:45:53 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:45:53.930238 | orchestrator | 2025-05-17 00:45:53 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:45:53.943245 | orchestrator | 2025-05-17 00:45:53 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:45:53.943740 | orchestrator | 2025-05-17 00:45:53 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:45:53.946811 | orchestrator | 2025-05-17 00:45:53 | INFO  | Task 5363eb78-722a-415d-86cc-90623644355b is in state STARTED
2025-05-17 00:45:53.946917 | orchestrator | 2025-05-17 00:45:53 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:45:56.995013 | orchestrator | 2025-05-17 00:45:56 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:45:56.995151 | orchestrator | 2025-05-17 00:45:56 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:45:56.995168 | orchestrator | 2025-05-17 00:45:56 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:45:56.995180 | orchestrator | 2025-05-17 00:45:56 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:45:56.995191 | orchestrator | 2025-05-17 00:45:56 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:45:56.995202 | orchestrator | 2025-05-17 00:45:56 | INFO  | Task 5363eb78-722a-415d-86cc-90623644355b is in state STARTED
2025-05-17 00:45:56.995213 | orchestrator | 2025-05-17 00:45:56 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:00.041667 | orchestrator | 2025-05-17 00:46:00 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:00.042291 | orchestrator | 2025-05-17 00:46:00 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:00.043422 | orchestrator | 2025-05-17 00:46:00 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:00.044688 | orchestrator | 2025-05-17 00:46:00 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:46:00.045657 | orchestrator | 2025-05-17 00:46:00 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:00.046555 | orchestrator | 2025-05-17 00:46:00 | INFO  | Task 5363eb78-722a-415d-86cc-90623644355b is in state STARTED
2025-05-17 00:46:00.047038 | orchestrator | 2025-05-17 00:46:00 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:03.103450 | orchestrator | 2025-05-17 00:46:03 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:03.104096 | orchestrator | 2025-05-17 00:46:03 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:03.104982 | orchestrator | 2025-05-17 00:46:03 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:03.108744 | orchestrator | 2025-05-17 00:46:03 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:46:03.108887 | orchestrator | 2025-05-17 00:46:03 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:03.115183 | orchestrator | 2025-05-17 00:46:03 | INFO  | Task 5363eb78-722a-415d-86cc-90623644355b is in state STARTED
2025-05-17 00:46:03.115214 | orchestrator | 2025-05-17 00:46:03 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:06.162774 | orchestrator | 2025-05-17 00:46:06 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:06.164510 | orchestrator | 2025-05-17 00:46:06 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:06.165170 | orchestrator | 2025-05-17 00:46:06 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:06.167124 | orchestrator | 2025-05-17 00:46:06 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:46:06.170513 | orchestrator | 2025-05-17 00:46:06 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:06.171528 | orchestrator |
2025-05-17 00:46:06.171550 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-05-17 00:46:06.171560 | orchestrator |
2025-05-17 00:46:06.171569 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-05-17 00:46:06.171577 | orchestrator | Saturday 17 May 2025 00:45:51 +0000 (0:00:00.257) 0:00:00.257 **********
2025-05-17 00:46:06.171607 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:46:06.171619 | orchestrator | changed: [testbed-manager]
2025-05-17 00:46:06.171628 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:46:06.171635 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:46:06.171643 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:46:06.171651 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:46:06.171659 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:46:06.171667 | orchestrator |
2025-05-17 00:46:06.171675 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
********
2025-05-17 00:46:06.171683 | orchestrator | Saturday 17 May 2025 00:45:56 +0000 (0:00:04.758) 0:00:05.015 **********
2025-05-17 00:46:06.171692 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-17 00:46:06.171700 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-17 00:46:06.171708 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-17 00:46:06.171716 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-17 00:46:06.171724 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-17 00:46:06.171732 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-17 00:46:06.171739 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-17 00:46:06.171747 | orchestrator |
2025-05-17 00:46:06.171755 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-05-17 00:46:06.171763 | orchestrator | Saturday 17 May 2025 00:45:58 +0000 (0:00:01.946) 0:00:06.962 **********
2025-05-17 00:46:06.171781 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-17 00:45:57.293442', 'end': '2025-05-17 00:45:57.299209', 'delta': '0:00:00.005767', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-17 00:46:06.171796 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-17 00:45:57.432855', 'end': '2025-05-17 00:45:57.441450', 'delta': '0:00:00.008595', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-17 00:46:06.171805 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-17 00:45:57.260073', 'end': '2025-05-17 00:45:57.269410', 'delta': '0:00:00.009337', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-17 00:46:06.171838 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-17 00:45:57.292551', 'end': '2025-05-17 00:45:57.300842', 'delta': '0:00:00.008291', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-17 00:46:06.171848 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-17 00:45:57.607473', 'end': '2025-05-17 00:45:57.617007', 'delta': '0:00:00.009534', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-17 00:46:06.171860 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-17 00:45:57.743016', 'end': '2025-05-17 00:45:57.751477', 'delta': '0:00:00.008461', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-17 00:46:06.171869 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-17 00:45:57.940680', 'end': '2025-05-17 00:45:57.949656', 'delta': '0:00:00.008976', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-17 00:46:06.171877 | orchestrator |
2025-05-17 00:46:06.171885 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-05-17 00:46:06.171893 | orchestrator | Saturday 17 May 2025 00:46:00 +0000 (0:00:02.200) 0:00:09.162 **********
2025-05-17 00:46:06.171901 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-05-17 00:46:06.171910 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-05-17 00:46:06.171917 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-05-17 00:46:06.171925 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-05-17 00:46:06.171975 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-05-17 00:46:06.171984 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-05-17 00:46:06.171992 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-05-17 00:46:06.172000 | orchestrator |
2025-05-17 00:46:06.172008 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:46:06.172016 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:46:06.172026 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:46:06.172034 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:46:06.172047 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:46:06.172056 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:46:06.172064 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:46:06.172072 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:46:06.172080 | orchestrator |
2025-05-17 00:46:06.172088 | orchestrator | Saturday 17 May 2025 00:46:03 +0000 (0:00:02.490)
0:00:11.653 **********
2025-05-17 00:46:06.172095 | orchestrator | ===============================================================================
2025-05-17 00:46:06.172103 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.76s
2025-05-17 00:46:06.172111 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.49s
2025-05-17 00:46:06.172119 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.20s
2025-05-17 00:46:06.172127 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.95s
2025-05-17 00:46:06.172626 | orchestrator | 2025-05-17 00:46:06 | INFO  | Task 5363eb78-722a-415d-86cc-90623644355b is in state SUCCESS
2025-05-17 00:46:06.172868 | orchestrator | 2025-05-17 00:46:06 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:06.172884 | orchestrator | 2025-05-17 00:46:06 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:09.221600 | orchestrator | 2025-05-17 00:46:09 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:09.226357 | orchestrator | 2025-05-17 00:46:09 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:09.226422 | orchestrator | 2025-05-17 00:46:09 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:09.226436 | orchestrator | 2025-05-17 00:46:09 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:46:09.229653 | orchestrator | 2025-05-17 00:46:09 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:09.229702 | orchestrator | 2025-05-17 00:46:09 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:09.229722 | orchestrator | 2025-05-17 00:46:09 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:12.288088 | orchestrator | 2025-05-17 00:46:12 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:12.289341 | orchestrator | 2025-05-17 00:46:12 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:12.291849 | orchestrator | 2025-05-17 00:46:12 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:12.295178 | orchestrator | 2025-05-17 00:46:12 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:46:12.295255 | orchestrator | 2025-05-17 00:46:12 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:12.298112 | orchestrator | 2025-05-17 00:46:12 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:12.298150 | orchestrator | 2025-05-17 00:46:12 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:15.371537 | orchestrator | 2025-05-17 00:46:15 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:15.371641 | orchestrator | 2025-05-17 00:46:15 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:15.371656 | orchestrator | 2025-05-17 00:46:15 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:15.371668 | orchestrator | 2025-05-17 00:46:15 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:46:15.371680 | orchestrator | 2025-05-17 00:46:15 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:15.371691 | orchestrator | 2025-05-17 00:46:15 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:15.371702 | orchestrator | 2025-05-17 00:46:15 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:18.419191 | orchestrator | 2025-05-17 00:46:18 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:18.420774 | orchestrator | 2025-05-17 00:46:18 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:18.423177 | orchestrator | 2025-05-17 00:46:18 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:18.426267 | orchestrator | 2025-05-17 00:46:18 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:46:18.427879 | orchestrator | 2025-05-17 00:46:18 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:18.429403 | orchestrator | 2025-05-17 00:46:18 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:18.429506 | orchestrator | 2025-05-17 00:46:18 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:21.486281 | orchestrator | 2025-05-17 00:46:21 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:21.486479 | orchestrator | 2025-05-17 00:46:21 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:21.490279 | orchestrator | 2025-05-17 00:46:21 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:21.491851 | orchestrator | 2025-05-17 00:46:21 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state STARTED
2025-05-17 00:46:21.493522 | orchestrator | 2025-05-17 00:46:21 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:21.495100 | orchestrator | 2025-05-17 00:46:21 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:21.495124 | orchestrator | 2025-05-17 00:46:21 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:24.554832 | orchestrator | 2025-05-17 00:46:24 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:24.555120 | orchestrator | 2025-05-17 00:46:24 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:24.555735 | orchestrator | 2025-05-17 00:46:24 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:24.556156 | orchestrator | 2025-05-17 00:46:24 | INFO  | Task 736479db-9e7d-4ae9-8ad3-f4289f416cce is in state SUCCESS
2025-05-17 00:46:24.558158 | orchestrator | 2025-05-17 00:46:24 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:24.559076 | orchestrator | 2025-05-17 00:46:24 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:24.559175 | orchestrator | 2025-05-17 00:46:24 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:27.611269 | orchestrator | 2025-05-17 00:46:27 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:27.612189 | orchestrator | 2025-05-17 00:46:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:46:27.612223 | orchestrator | 2025-05-17 00:46:27 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:27.614890 | orchestrator | 2025-05-17 00:46:27 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:27.615166 | orchestrator | 2025-05-17 00:46:27 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:27.617045 | orchestrator | 2025-05-17 00:46:27 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:27.617064 | orchestrator | 2025-05-17 00:46:27 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:30.664833 | orchestrator | 2025-05-17 00:46:30 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:30.666844 | orchestrator | 2025-05-17 00:46:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:46:30.667454 | orchestrator | 2025-05-17 00:46:30 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:30.669108 | orchestrator | 2025-05-17 00:46:30 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:30.672161 | orchestrator | 2025-05-17 00:46:30 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:30.673316 | orchestrator | 2025-05-17 00:46:30 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:30.673595 | orchestrator | 2025-05-17 00:46:30 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:33.843508 | orchestrator | 2025-05-17 00:46:33 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:33.845727 | orchestrator | 2025-05-17 00:46:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:46:33.845795 | orchestrator | 2025-05-17 00:46:33 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:33.845815 | orchestrator | 2025-05-17 00:46:33 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:33.847618 | orchestrator | 2025-05-17 00:46:33 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:46:33.849627 | orchestrator | 2025-05-17 00:46:33 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
2025-05-17 00:46:33.850181 | orchestrator | 2025-05-17 00:46:33 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:46:36.946801 | orchestrator | 2025-05-17 00:46:36 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED
2025-05-17 00:46:36.951237 | orchestrator | 2025-05-17 00:46:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:46:36.957100 | orchestrator | 2025-05-17 00:46:36 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED
2025-05-17 00:46:36.963248 | orchestrator | 2025-05-17 00:46:36 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED
2025-05-17 00:46:36.963292 |
orchestrator | 2025-05-17 00:46:36 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:46:36.963359 | orchestrator | 2025-05-17 00:46:36 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED 2025-05-17 00:46:36.963380 | orchestrator | 2025-05-17 00:46:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:46:40.002725 | orchestrator | 2025-05-17 00:46:40 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED 2025-05-17 00:46:40.006936 | orchestrator | 2025-05-17 00:46:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:46:40.007068 | orchestrator | 2025-05-17 00:46:40 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED 2025-05-17 00:46:40.008922 | orchestrator | 2025-05-17 00:46:40 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED 2025-05-17 00:46:40.009587 | orchestrator | 2025-05-17 00:46:40 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:46:40.013262 | orchestrator | 2025-05-17 00:46:40 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED 2025-05-17 00:46:40.013323 | orchestrator | 2025-05-17 00:46:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:46:43.073100 | orchestrator | 2025-05-17 00:46:43 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED 2025-05-17 00:46:43.074289 | orchestrator | 2025-05-17 00:46:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:46:43.075032 | orchestrator | 2025-05-17 00:46:43 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state STARTED 2025-05-17 00:46:43.082552 | orchestrator | 2025-05-17 00:46:43 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state STARTED 2025-05-17 00:46:43.083503 | orchestrator | 2025-05-17 00:46:43 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:46:43.085779 | 
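The repeated state checks above come from a client-side loop that polls each submitted task until it leaves the STARTED state, sleeping between rounds. A minimal sketch of such a loop; the `get_task_state` callable and the task IDs are hypothetical stand-ins, not the actual OSISM client API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until every task has left the STARTED state.

    get_task_state is a caller-supplied function mapping a task ID to a
    state string such as "STARTED" or "SUCCESS" (hypothetical helper,
    standing in for whatever the real task backend provides).
    """
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Keep polling only the tasks that have not finished yet.
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

With a fixed interval of one second this reproduces the cadence seen in the log: one status line per pending task, then a "Wait 1 second(s)" line, repeated until all tasks report SUCCESS.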
orchestrator | 2025-05-17 00:46:43 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED
orchestrator | 2025-05-17 00:46:43 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2025-05-17 00:46:46 | INFO  | Task 826d69a1-36bd-4847-b227-a70a949ec1d6 is in state SUCCESS
[remaining tasks still in state STARTED; identical poll cycles at 00:46:49 through 00:47:01 omitted]
orchestrator | 2025-05-17 00:47:04 | INFO  | Task d2f47766-02e9-4c2e-923b-86b7fc350a2a is in state SUCCESS
orchestrator |
orchestrator |
orchestrator | PLAY [Apply role homer] ********************************************************
orchestrator |
orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
orchestrator | Saturday 17 May 2025 00:45:50 +0000 (0:00:00.409)       0:00:00.409 **********
orchestrator | ok: [testbed-manager] => {
orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
orchestrator | }
orchestrator |
orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
orchestrator | Saturday 17 May 2025 00:45:50 +0000 (0:00:00.255)       0:00:00.664 **********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [osism.services.homer : Create required directories] **********************
orchestrator | Saturday 17 May 2025 00:45:51 +0000 (0:00:01.033)       0:00:01.698 **********
orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
orchestrator | ok: [testbed-manager] => (item=/opt/homer)
orchestrator |
orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
orchestrator | Saturday 17 May 2025 00:45:53 +0000 (0:00:01.266)       0:00:02.965 **********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
orchestrator | Saturday 17 May 2025 00:45:55 +0000 (0:00:01.937)       0:00:04.902 **********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
orchestrator | Saturday 17 May 2025 00:45:56 +0000 (0:00:01.757)       0:00:06.660 **********
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
orchestrator | Saturday 17 May 2025 00:46:21 +0000 (0:00:24.761)       0:00:31.422 **********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-manager            : ok=7    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
orchestrator |
orchestrator | Saturday 17 May 2025 00:46:23 +0000 (0:00:02.029)       0:00:33.451 **********
orchestrator | ===============================================================================
orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.76s
orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.03s
orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.94s
orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.76s
orchestrator | osism.services.homer : Create required directories ---------------------- 1.27s
orchestrator | osism.services.homer : Create traefik external network ------------------ 1.03s
orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.26s
orchestrator |
orchestrator |
orchestrator | PLAY [Apply role openstackclient] **********************************************
orchestrator |
orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
orchestrator | Saturday 17 May 2025 00:45:50 +0000 (0:00:00.273)       0:00:00.273 **********
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
orchestrator |
orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
orchestrator | Saturday 17 May 2025 00:45:50 +0000 (0:00:00.346)       0:00:00.620 **********
orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
orchestrator |
orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
orchestrator | Saturday 17 May 2025 00:45:52 +0000 (0:00:01.286)       0:00:01.906 **********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
orchestrator | Saturday 17 May 2025 00:45:53 +0000 (0:00:01.546)       0:00:03.452 **********
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
orchestrator | Saturday 17 May 2025 00:46:35 +0000 (0:00:41.851)       0:00:45.303 **********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
orchestrator | Saturday 17 May 2025 00:46:37 +0000 (0:00:01.736)       0:00:47.040 **********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
orchestrator | Saturday 17 May 2025 00:46:38 +0000 (0:00:01.011)       0:00:48.052 **********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
orchestrator | Saturday 17 May 2025 00:46:40 +0000 (0:00:01.984)       0:00:50.036 **********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
orchestrator | Saturday 17 May 2025 00:46:41 +0000 (0:00:01.239)       0:00:51.276 **********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
orchestrator | Saturday 17 May 2025 00:46:42 +0000 (0:00:00.907)       0:00:52.183 **********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-manager            : ok=10   changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
orchestrator |
orchestrator | Saturday 17 May 2025 00:46:42 +0000 (0:00:00.458)       0:00:52.641 **********
orchestrator | ===============================================================================
orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 41.85s
orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.98s
orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.74s
orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.55s
orchestrator | osism.services.openstackclient : Create required directories ------------ 1.29s
orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.24s
orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.01s
orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.91s
orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.46s
orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.35s
orchestrator |
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Saturday 17 May 2025 00:45:52 +0000 (0:00:00.467)       0:00:00.467 **********
orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
orchestrator |
orchestrator | PLAY [Apply role netdata] ******************************************************
orchestrator |
orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
orchestrator | Saturday 17 May 2025 00:45:53 +0000 (0:00:01.416)       0:00:01.884 **********
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
orchestrator | Saturday 17 May 2025 00:45:55 +0000 (0:00:02.073)       0:00:03.957 **********
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
orchestrator | Saturday 17 May 2025 00:45:57 +0000 (0:00:02.086)       0:00:06.044 **********
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
orchestrator | Saturday 17 May 2025 00:46:01 +0000 (0:00:03.589)       0:00:09.634 **********
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.netdata : Add repository] *********************************
orchestrator | Saturday 17 May 2025 00:46:03 +0000 (0:00:02.027)       0:00:11.661 **********
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
orchestrator | Saturday 17 May 2025 00:46:13 +0000 (0:00:10.192)       0:00:21.854 **********
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
orchestrator | Saturday 17 May 2025 00:46:37 +0000 (0:00:24.405)       0:00:46.260 **********
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
orchestrator | Saturday 17 May 2025 00:46:41 +0000 (0:00:03.247)       0:00:49.507 **********
orchestrator | changed: [testbed-manager] => (item=netdata.conf)
orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
orchestrator | changed: [testbed-manager] => (item=stream.conf)
orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
orchestrator | changed: [testbed-node-0] => (item=stream.conf)
orchestrator | changed: [testbed-node-2] => (item=stream.conf)
orchestrator | changed: [testbed-node-5] => (item=stream.conf)
orchestrator | changed: [testbed-node-1] => (item=stream.conf)
orchestrator | changed: [testbed-node-3] => (item=stream.conf)
orchestrator | changed: [testbed-node-4] => (item=stream.conf)
orchestrator |
orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
orchestrator | Saturday 17 May 2025 00:46:47 +0000 (0:00:05.888)       0:00:55.396 **********
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
orchestrator | Saturday 17 May 2025 00:46:48 +0000 (0:00:01.390)       0:00:56.786 **********
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
orchestrator | Saturday 17 May 2025 00:46:50 +0000 (0:00:01.869)       0:00:58.656 **********
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
orchestrator | Saturday 17 May 2025 00:46:51 +0000 (0:00:01.411)       0:01:00.067 **********
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
orchestrator | Saturday 17 May 2025 00:46:53 +0000 (0:00:02.097)       0:01:02.165 **********
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator | 2025-05-17 00:47:04.445352 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-17 00:47:04.445368 | orchestrator | Saturday 17 May 2025 00:46:56 +0000 (0:00:02.240) 0:01:04.405 ********** 2025-05-17 00:47:04.445384 | orchestrator | changed: [testbed-manager] 2025-05-17 00:47:04.445401 | orchestrator | 2025-05-17 00:47:04.445417 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-17 00:47:04.445433 | orchestrator | Saturday 17 May 2025 00:46:57 +0000 (0:00:01.829) 0:01:06.234 ********** 2025-05-17 00:47:04.445450 | orchestrator | changed: [testbed-manager] 2025-05-17 00:47:04.445467 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:47:04.445483 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:47:04.445500 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:47:04.445516 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:47:04.445533 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:47:04.445550 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:47:04.445566 | orchestrator | 2025-05-17 00:47:04.445582 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:47:04.445599 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 00:47:04.445617 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 00:47:04.445634 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 00:47:04.445651 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 00:47:04.445677 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 00:47:04.445693 | orchestrator | testbed-node-4 : 
ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 00:47:04.445710 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 00:47:04.445726 | orchestrator | 2025-05-17 00:47:04.445744 | orchestrator | Saturday 17 May 2025 00:47:01 +0000 (0:00:03.513) 0:01:09.747 ********** 2025-05-17 00:47:04.445760 | orchestrator | =============================================================================== 2025-05-17 00:47:04.445777 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 24.41s 2025-05-17 00:47:04.445793 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.19s 2025-05-17 00:47:04.445809 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.89s 2025-05-17 00:47:04.445826 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.59s 2025-05-17 00:47:04.445843 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.51s 2025-05-17 00:47:04.445860 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 3.25s 2025-05-17 00:47:04.445876 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.24s 2025-05-17 00:47:04.445892 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.10s 2025-05-17 00:47:04.445908 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.09s 2025-05-17 00:47:04.445925 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.07s 2025-05-17 00:47:04.445941 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.03s 2025-05-17 00:47:04.445985 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.87s 2025-05-17 
00:47:04.446002 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.83s 2025-05-17 00:47:04.446072 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.42s 2025-05-17 00:47:04.446094 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.41s 2025-05-17 00:47:04.446165 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.39s 2025-05-17 00:47:04.446413 | orchestrator | 2025-05-17 00:47:04 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:47:04.446736 | orchestrator | 2025-05-17 00:47:04 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED 2025-05-17 00:47:04.446758 | orchestrator | 2025-05-17 00:47:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:47:07.478747 | orchestrator | 2025-05-17 00:47:07 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED 2025-05-17 00:47:07.479052 | orchestrator | 2025-05-17 00:47:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:47:07.479462 | orchestrator | 2025-05-17 00:47:07 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:47:07.480015 | orchestrator | 2025-05-17 00:47:07 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED 2025-05-17 00:47:07.480040 | orchestrator | 2025-05-17 00:47:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:47:10.520851 | orchestrator | 2025-05-17 00:47:10 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED 2025-05-17 00:47:10.522851 | orchestrator | 2025-05-17 00:47:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:47:10.522889 | orchestrator | 2025-05-17 00:47:10 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:47:10.522901 | orchestrator | 
1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state STARTED 2025-05-17 00:47:37.991934 | orchestrator | 2025-05-17 00:47:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:47:41.034711 | orchestrator | 2025-05-17 00:47:41 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED 2025-05-17 00:47:41.036590 | orchestrator | 2025-05-17 00:47:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:47:41.036632 | orchestrator | 2025-05-17 00:47:41 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:47:41.037199 | orchestrator | 2025-05-17 00:47:41 | INFO  | Task 1b745e95-1018-46d2-ac08-1c5d4b5c9576 is in state SUCCESS 2025-05-17 00:47:41.037213 | orchestrator | 2025-05-17 00:47:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:02.401456 | orchestrator | 
2025-05-17 00:48:02 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED 2025-05-17 00:48:02.401606 | orchestrator | 2025-05-17 00:48:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:02.401625 | orchestrator | 2025-05-17 00:48:02 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:02.401637 | orchestrator | 2025-05-17 00:48:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:05.452615 | orchestrator | 2025-05-17 00:48:05 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state STARTED 2025-05-17 00:48:05.453614 | orchestrator | 2025-05-17 00:48:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:05.455164 | orchestrator | 2025-05-17 00:48:05 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:05.455699 | orchestrator | 2025-05-17 00:48:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:08.489755 | orchestrator | 2025-05-17 00:48:08 | INFO  | Task ea99c7ba-34de-4603-bbc6-7f2aa7523ddc is in state SUCCESS 2025-05-17 00:48:08.492327 | orchestrator | 2025-05-17 00:48:08.492403 | orchestrator | 2025-05-17 00:48:08.492418 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-17 00:48:08.492430 | orchestrator | 2025-05-17 00:48:08.492442 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-17 00:48:08.492453 | orchestrator | Saturday 17 May 2025 00:46:07 +0000 (0:00:00.265) 0:00:00.265 ********** 2025-05-17 00:48:08.492465 | orchestrator | ok: [testbed-manager] 2025-05-17 00:48:08.492478 | orchestrator | 2025-05-17 00:48:08.492489 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-17 00:48:08.492500 | orchestrator | Saturday 17 May 2025 00:46:09 +0000 (0:00:01.501) 0:00:01.767 ********** 2025-05-17 
00:48:08.492511 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-05-17 00:48:08.492522 | orchestrator | 2025-05-17 00:48:08.492533 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-17 00:48:08.492544 | orchestrator | Saturday 17 May 2025 00:46:10 +0000 (0:00:00.855) 0:00:02.622 ********** 2025-05-17 00:48:08.492555 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.492567 | orchestrator | 2025-05-17 00:48:08.492579 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-17 00:48:08.492589 | orchestrator | Saturday 17 May 2025 00:46:12 +0000 (0:00:01.838) 0:00:04.460 ********** 2025-05-17 00:48:08.492600 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-05-17 00:48:08.492611 | orchestrator | ok: [testbed-manager] 2025-05-17 00:48:08.492622 | orchestrator | 2025-05-17 00:48:08.492633 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-17 00:48:08.492644 | orchestrator | Saturday 17 May 2025 00:47:13 +0000 (0:01:01.402) 0:01:05.862 ********** 2025-05-17 00:48:08.492654 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.492665 | orchestrator | 2025-05-17 00:48:08.492683 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:48:08.492716 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 00:48:08.492728 | orchestrator | 2025-05-17 00:48:08.492739 | orchestrator | Saturday 17 May 2025 00:47:38 +0000 (0:00:24.686) 0:01:30.549 ********** 2025-05-17 00:48:08.492750 | orchestrator | =============================================================================== 2025-05-17 00:48:08.492760 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 61.40s 2025-05-17 
00:48:08.492772 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 24.69s 2025-05-17 00:48:08.492786 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.84s 2025-05-17 00:48:08.492799 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.50s 2025-05-17 00:48:08.492811 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.86s 2025-05-17 00:48:08.492824 | orchestrator | 2025-05-17 00:48:08.492837 | orchestrator | 2025-05-17 00:48:08.492851 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-05-17 00:48:08.492864 | orchestrator | 2025-05-17 00:48:08.492877 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-17 00:48:08.492890 | orchestrator | Saturday 17 May 2025 00:45:46 +0000 (0:00:00.339) 0:00:00.339 ********** 2025-05-17 00:48:08.492904 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:48:08.492918 | orchestrator | 2025-05-17 00:48:08.492934 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-17 00:48:08.492953 | orchestrator | Saturday 17 May 2025 00:45:48 +0000 (0:00:01.401) 0:00:01.740 ********** 2025-05-17 00:48:08.493037 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-17 00:48:08.493062 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-17 00:48:08.493080 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-17 00:48:08.493100 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-17 00:48:08.493118 | orchestrator | changed: 
[testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-17 00:48:08.493137 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-17 00:48:08.493155 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-17 00:48:08.493174 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-17 00:48:08.493192 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-17 00:48:08.493206 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-17 00:48:08.493218 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-17 00:48:08.493229 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-17 00:48:08.493239 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-17 00:48:08.493250 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-17 00:48:08.493261 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-17 00:48:08.493272 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-17 00:48:08.493291 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-17 00:48:08.493329 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-17 00:48:08.493348 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-17 00:48:08.493382 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-17 00:48:08.493403 | orchestrator | changed: 
[testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-17 00:48:08.493422 | orchestrator | 2025-05-17 00:48:08.493441 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-17 00:48:08.493461 | orchestrator | Saturday 17 May 2025 00:45:52 +0000 (0:00:03.651) 0:00:05.392 ********** 2025-05-17 00:48:08.493480 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:48:08.493500 | orchestrator | 2025-05-17 00:48:08.493519 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-17 00:48:08.493538 | orchestrator | Saturday 17 May 2025 00:45:53 +0000 (0:00:01.572) 0:00:06.964 ********** 2025-05-17 00:48:08.493572 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.493598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-05-17 00:48:08.493613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.493625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.493643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.493663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.493723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.493745 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.493769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.493787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.493805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.493822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.493859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.493878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.493903 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-17 00:48:08.493927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.493947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.493966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.494117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.494139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.494160 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.494172 | orchestrator | 2025-05-17 00:48:08.494184 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-17 00:48:08.494195 | orchestrator | Saturday 17 May 2025 00:45:58 +0000 (0:00:04.607) 0:00:11.572 ********** 2025-05-17 00:48:08.494218 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494231 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494249 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494262 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:48:08.494274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494369 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:48:08.494381 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:48:08.494397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494432 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:48:08.494444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494491 | 
orchestrator | skipping: [testbed-node-3] 2025-05-17 00:48:08.494509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494547 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:48:08.494558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494594 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:48:08.494604 | orchestrator | 2025-05-17 00:48:08.494614 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-17 00:48:08.494624 | orchestrator | Saturday 17 May 2025 00:46:00 +0000 (0:00:01.864) 0:00:13.436 ********** 2025-05-17 00:48:08.494634 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494650 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494660 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494712 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:48:08.494722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494732 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494758 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:48:08.494768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494803 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:48:08.494813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494848 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:48:08.494858 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:48:08.494868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494905 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:48:08.494919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-17 00:48:08.494929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.494955 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:48:08.494965 | orchestrator | 2025-05-17 00:48:08.494996 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-17 00:48:08.495007 | orchestrator | Saturday 17 May 2025 00:46:02 +0000 (0:00:02.405) 0:00:15.841 ********** 2025-05-17 00:48:08.495017 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:48:08.495026 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:48:08.495036 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:48:08.495046 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:48:08.495056 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:48:08.495065 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:48:08.495075 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:48:08.495084 | orchestrator | 2025-05-17 00:48:08.495094 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-17 00:48:08.495103 | orchestrator | Saturday 17 May 2025 00:46:03 +0000 (0:00:00.817) 0:00:16.659 ********** 2025-05-17 00:48:08.495113 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:48:08.495123 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:48:08.495132 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:48:08.495142 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:48:08.495151 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:48:08.495161 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:48:08.495170 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:48:08.495180 | 
orchestrator | 2025-05-17 00:48:08.495262 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-17 00:48:08.495275 | orchestrator | Saturday 17 May 2025 00:46:04 +0000 (0:00:00.838) 0:00:17.498 ********** 2025-05-17 00:48:08.495285 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:48:08.495295 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:48:08.495305 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:48:08.495315 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:48:08.495325 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:48:08.495334 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:48:08.495344 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.495354 | orchestrator | 2025-05-17 00:48:08.495363 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-05-17 00:48:08.495373 | orchestrator | Saturday 17 May 2025 00:46:38 +0000 (0:00:34.342) 0:00:51.840 ********** 2025-05-17 00:48:08.495383 | orchestrator | ok: [testbed-manager] 2025-05-17 00:48:08.495401 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:48:08.495411 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:48:08.495421 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:48:08.495431 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:48:08.495441 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:48:08.495450 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:48:08.495460 | orchestrator | 2025-05-17 00:48:08.495469 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-17 00:48:08.495479 | orchestrator | Saturday 17 May 2025 00:46:41 +0000 (0:00:03.375) 0:00:55.216 ********** 2025-05-17 00:48:08.495497 | orchestrator | ok: [testbed-manager] 2025-05-17 00:48:08.495506 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:48:08.495516 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:48:08.495526 | 
orchestrator | ok: [testbed-node-2] 2025-05-17 00:48:08.495536 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:48:08.495545 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:48:08.495555 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:48:08.495564 | orchestrator | 2025-05-17 00:48:08.495574 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-05-17 00:48:08.495584 | orchestrator | Saturday 17 May 2025 00:46:42 +0000 (0:00:01.089) 0:00:56.306 ********** 2025-05-17 00:48:08.495593 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:48:08.495603 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:48:08.495612 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:48:08.495622 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:48:08.495632 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:48:08.495641 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:48:08.495650 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:48:08.495660 | orchestrator | 2025-05-17 00:48:08.495670 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-17 00:48:08.495679 | orchestrator | Saturday 17 May 2025 00:46:44 +0000 (0:00:01.182) 0:00:57.488 ********** 2025-05-17 00:48:08.495689 | orchestrator | skipping: [testbed-manager] 2025-05-17 00:48:08.495699 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:48:08.495708 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:48:08.495718 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:48:08.495727 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:48:08.495737 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:48:08.495746 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:48:08.495756 | orchestrator | 2025-05-17 00:48:08.495766 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-17 00:48:08.495775 | 
orchestrator | Saturday 17 May 2025 00:46:45 +0000 (0:00:01.014) 0:00:58.502 ********** 2025-05-17 00:48:08.495786 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.495796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.495807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.495817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.495841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.495857 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.495873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.495884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.495894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.495904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.495914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.495988 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496032 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-17 00:48:08.496089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496154 | orchestrator | 2025-05-17 00:48:08.496164 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-17 00:48:08.496174 | orchestrator | Saturday 17 May 2025 00:46:50 +0000 (0:00:05.474) 0:01:03.977 ********** 2025-05-17 00:48:08.496185 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a directory 2025-05-17 00:48:08.496235 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-17 00:48:08.496245 | orchestrator | 2025-05-17 00:48:08.496254 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-17 00:48:08.496264 | orchestrator | Saturday 17 May 2025 00:46:51 +0000 (0:00:00.939) 0:01:04.916 ********** 2025-05-17 00:48:08.496273 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a directory 2025-05-17 00:48:08.496327 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-17 00:48:08.496337 | orchestrator | 2025-05-17 00:48:08.496346 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-17 00:48:08.496356 | orchestrator | Saturday 17 May 2025 00:46:52 +0000 (0:00:00.844) 0:01:05.761 ********** 2025-05-17 00:48:08.496366 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a directory 2025-05-17 00:48:08.496414 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-17 00:48:08.496424 | orchestrator | 2025-05-17 00:48:08.496433 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-17 00:48:08.496443 | orchestrator | Saturday 17 May 2025 00:46:53 +0000 (0:00:00.663) 0:01:06.633 ********** 2025-05-17 00:48:08.496459 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a directory 2025-05-17 00:48:08.496507 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-17 00:48:08.496517 | orchestrator | 2025-05-17 00:48:08.496527 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-17 00:48:08.496536 | orchestrator | Saturday 17 May 2025 00:46:53 +0000 (0:00:00.663) 0:01:07.297 ********** 2025-05-17 00:48:08.496546 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.496556 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:48:08.496565 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:48:08.496575 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:48:08.496585 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:48:08.496594 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:48:08.496604 |
orchestrator | changed: [testbed-node-5] 2025-05-17 00:48:08.496614 | orchestrator | 2025-05-17 00:48:08.496624 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-17 00:48:08.496633 | orchestrator | Saturday 17 May 2025 00:46:57 +0000 (0:00:03.802) 0:01:11.099 ********** 2025-05-17 00:48:08.496643 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-17 00:48:08.496653 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-17 00:48:08.496662 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-17 00:48:08.496672 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-17 00:48:08.496681 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-17 00:48:08.496691 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-17 00:48:08.496700 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-17 00:48:08.496710 | orchestrator | 2025-05-17 00:48:08.496720 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-17 00:48:08.496730 | orchestrator | Saturday 17 May 2025 00:47:00 +0000 (0:00:03.067) 0:01:14.167 ********** 2025-05-17 00:48:08.496740 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.496749 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:48:08.496759 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:48:08.496769 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:48:08.496778 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:48:08.496793 | 
orchestrator | changed: [testbed-node-4] 2025-05-17 00:48:08.496804 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:48:08.496813 | orchestrator | 2025-05-17 00:48:08.496823 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-17 00:48:08.496832 | orchestrator | Saturday 17 May 2025 00:47:03 +0000 (0:00:02.208) 0:01:16.375 ********** 2025-05-17 00:48:08.496842 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.496858 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.496875 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496896 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.496906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.496916 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.496933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.496944 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.496955 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.497043 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497053 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 
00:48:08.497074 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497091 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.497119 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497149 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:48:08.497172 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497182 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497192 | orchestrator | 2025-05-17 00:48:08.497202 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-17 00:48:08.497212 | orchestrator | Saturday 17 May 2025 00:47:05 +0000 (0:00:02.209) 0:01:18.585 ********** 2025-05-17 00:48:08.497222 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-17 00:48:08.497231 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-17 00:48:08.497240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-17 00:48:08.497250 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-17 00:48:08.497259 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-17 00:48:08.497269 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-17 00:48:08.497278 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-17 00:48:08.497288 | orchestrator | 2025-05-17 00:48:08.497298 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-17 00:48:08.497327 | orchestrator | Saturday 17 May 2025 00:47:07 +0000 (0:00:01.935) 0:01:20.520 ********** 2025-05-17 00:48:08.497337 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-17 00:48:08.497345 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-17 00:48:08.497352 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-17 00:48:08.497360 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-17 00:48:08.497368 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-17 00:48:08.497376 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-17 00:48:08.497383 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-17 00:48:08.497391 | orchestrator | 2025-05-17 00:48:08.497399 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-17 00:48:08.497407 | orchestrator | Saturday 17 May 2025 00:47:09 +0000 (0:00:02.404) 0:01:22.925 ********** 2025-05-17 00:48:08.497419 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497453 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497471 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497518 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-17 00:48:08.497553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-17 00:48:08.497616 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:48:08.497677 | orchestrator | 2025-05-17 00:48:08.497691 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-17 00:48:08.497704 | orchestrator | Saturday 17 May 2025 00:47:13 +0000 (0:00:03.677) 0:01:26.603 ********** 2025-05-17 00:48:08.497716 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.497737 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:48:08.497751 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:48:08.497765 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:48:08.497778 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:48:08.497791 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:48:08.497805 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:48:08.497817 | orchestrator | 2025-05-17 00:48:08.497831 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-17 00:48:08.497844 | orchestrator | Saturday 17 May 2025 00:47:15 +0000 (0:00:01.884) 
0:01:28.487 ********** 2025-05-17 00:48:08.497858 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.497871 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:48:08.497885 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:48:08.497898 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:48:08.497911 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:48:08.497924 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:48:08.497937 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:48:08.497951 | orchestrator | 2025-05-17 00:48:08.497964 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-17 00:48:08.497999 | orchestrator | Saturday 17 May 2025 00:47:16 +0000 (0:00:01.620) 0:01:30.108 ********** 2025-05-17 00:48:08.498012 | orchestrator | 2025-05-17 00:48:08.498067 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-17 00:48:08.498080 | orchestrator | Saturday 17 May 2025 00:47:16 +0000 (0:00:00.060) 0:01:30.168 ********** 2025-05-17 00:48:08.498094 | orchestrator | 2025-05-17 00:48:08.498108 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-17 00:48:08.498121 | orchestrator | Saturday 17 May 2025 00:47:16 +0000 (0:00:00.051) 0:01:30.219 ********** 2025-05-17 00:48:08.498135 | orchestrator | 2025-05-17 00:48:08.498148 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-17 00:48:08.498169 | orchestrator | Saturday 17 May 2025 00:47:16 +0000 (0:00:00.052) 0:01:30.271 ********** 2025-05-17 00:48:08.498181 | orchestrator | 2025-05-17 00:48:08.498195 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-17 00:48:08.498208 | orchestrator | Saturday 17 May 2025 00:47:17 +0000 (0:00:00.234) 0:01:30.506 ********** 2025-05-17 00:48:08.498221 | orchestrator | 2025-05-17 
00:48:08.498234 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-17 00:48:08.498249 | orchestrator | Saturday 17 May 2025 00:47:17 +0000 (0:00:00.052) 0:01:30.559 ********** 2025-05-17 00:48:08.498263 | orchestrator | 2025-05-17 00:48:08.498277 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-17 00:48:08.498290 | orchestrator | Saturday 17 May 2025 00:47:17 +0000 (0:00:00.053) 0:01:30.612 ********** 2025-05-17 00:48:08.498303 | orchestrator | 2025-05-17 00:48:08.498316 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-17 00:48:08.498330 | orchestrator | Saturday 17 May 2025 00:47:17 +0000 (0:00:00.066) 0:01:30.678 ********** 2025-05-17 00:48:08.498353 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.498366 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:48:08.498380 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:48:08.498392 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:48:08.498406 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:48:08.498419 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:48:08.498432 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:48:08.498445 | orchestrator | 2025-05-17 00:48:08.498459 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-17 00:48:08.498472 | orchestrator | Saturday 17 May 2025 00:47:25 +0000 (0:00:08.414) 0:01:39.093 ********** 2025-05-17 00:48:08.498486 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:48:08.498499 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:48:08.498513 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:48:08.498527 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:48:08.498541 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:48:08.498555 | orchestrator | changed: [testbed-node-5] 
2025-05-17 00:48:08.498568 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.498581 | orchestrator | 2025-05-17 00:48:08.498595 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-17 00:48:08.498608 | orchestrator | Saturday 17 May 2025 00:47:53 +0000 (0:00:27.664) 0:02:06.758 ********** 2025-05-17 00:48:08.498621 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:48:08.498636 | orchestrator | ok: [testbed-manager] 2025-05-17 00:48:08.498649 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:48:08.498663 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:48:08.498676 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:48:08.498690 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:48:08.498703 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:48:08.498717 | orchestrator | 2025-05-17 00:48:08.498731 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-17 00:48:08.498746 | orchestrator | Saturday 17 May 2025 00:47:55 +0000 (0:00:02.130) 0:02:08.888 ********** 2025-05-17 00:48:08.498760 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:48:08.498775 | orchestrator | changed: [testbed-manager] 2025-05-17 00:48:08.498789 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:48:08.498803 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:48:08.498817 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:48:08.498832 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:48:08.498847 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:48:08.498861 | orchestrator | 2025-05-17 00:48:08.498876 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:48:08.498891 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-17 00:48:08.498907 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 
skipped=6  rescued=0 ignored=0 2025-05-17 00:48:08.498921 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-17 00:48:08.498948 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-17 00:48:08.498962 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-17 00:48:08.498991 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-17 00:48:08.499006 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-17 00:48:08.499029 | orchestrator | 2025-05-17 00:48:08.499042 | orchestrator | 2025-05-17 00:48:08.499056 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 00:48:08.499069 | orchestrator | Saturday 17 May 2025 00:48:05 +0000 (0:00:09.955) 0:02:18.843 ********** 2025-05-17 00:48:08.499082 | orchestrator | =============================================================================== 2025-05-17 00:48:08.499095 | orchestrator | common : Ensure fluentd image is present for label check --------------- 34.34s 2025-05-17 00:48:08.499108 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 27.66s 2025-05-17 00:48:08.499121 | orchestrator | common : Restart cron container ----------------------------------------- 9.96s 2025-05-17 00:48:08.499134 | orchestrator | common : Restart fluentd container -------------------------------------- 8.41s 2025-05-17 00:48:08.499146 | orchestrator | common : Copying over config.json files for services -------------------- 5.47s 2025-05-17 00:48:08.499159 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.61s 2025-05-17 00:48:08.499178 | orchestrator | common : Copying over td-agent.conf 
------------------------------------- 3.80s 2025-05-17 00:48:08.499192 | orchestrator | common : Check common containers ---------------------------------------- 3.68s 2025-05-17 00:48:08.499204 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.65s 2025-05-17 00:48:08.499218 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 3.38s 2025-05-17 00:48:08.499230 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.07s 2025-05-17 00:48:08.499243 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.41s 2025-05-17 00:48:08.499257 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.40s 2025-05-17 00:48:08.499269 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.21s 2025-05-17 00:48:08.499282 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.21s 2025-05-17 00:48:08.499296 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.13s 2025-05-17 00:48:08.499309 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.94s 2025-05-17 00:48:08.499321 | orchestrator | common : Creating log volume -------------------------------------------- 1.88s 2025-05-17 00:48:08.499334 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.86s 2025-05-17 00:48:08.499346 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.62s 2025-05-17 00:48:08.499358 | orchestrator | 2025-05-17 00:48:08 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:08.499371 | orchestrator | 2025-05-17 00:48:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:08.499383 | orchestrator | 2025-05-17 00:48:08 | 
INFO  | Task a3d536f9-0635-40b0-81ef-22e399c0e3d0 is in state STARTED 2025-05-17 00:48:08.499394 | orchestrator | 2025-05-17 00:48:08 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:08.499406 | orchestrator | 2025-05-17 00:48:08 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:08.499418 | orchestrator | 2025-05-17 00:48:08 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:08.499436 | orchestrator | 2025-05-17 00:48:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:11.533841 | orchestrator | 2025-05-17 00:48:11 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:11.533953 | orchestrator | 2025-05-17 00:48:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:11.534082 | orchestrator | 2025-05-17 00:48:11 | INFO  | Task a3d536f9-0635-40b0-81ef-22e399c0e3d0 is in state STARTED 2025-05-17 00:48:11.534434 | orchestrator | 2025-05-17 00:48:11 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:11.535404 | orchestrator | 2025-05-17 00:48:11 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:11.536182 | orchestrator | 2025-05-17 00:48:11 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:11.536636 | orchestrator | 2025-05-17 00:48:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:14.576189 | orchestrator | 2025-05-17 00:48:14 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:14.576886 | orchestrator | 2025-05-17 00:48:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:14.580656 | orchestrator | 2025-05-17 00:48:14 | INFO  | Task a3d536f9-0635-40b0-81ef-22e399c0e3d0 is in state STARTED 2025-05-17 00:48:14.581605 | orchestrator | 2025-05-17 00:48:14 | INFO  | 
Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:14.582199 | orchestrator | 2025-05-17 00:48:14 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:14.586520 | orchestrator | 2025-05-17 00:48:14 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:14.586592 | orchestrator | 2025-05-17 00:48:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:17.651338 | orchestrator | 2025-05-17 00:48:17 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:17.656171 | orchestrator | 2025-05-17 00:48:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:17.658189 | orchestrator | 2025-05-17 00:48:17 | INFO  | Task a3d536f9-0635-40b0-81ef-22e399c0e3d0 is in state STARTED 2025-05-17 00:48:17.662657 | orchestrator | 2025-05-17 00:48:17 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:17.665088 | orchestrator | 2025-05-17 00:48:17 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:17.667385 | orchestrator | 2025-05-17 00:48:17 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:17.667446 | orchestrator | 2025-05-17 00:48:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:20.705478 | orchestrator | 2025-05-17 00:48:20 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:20.707426 | orchestrator | 2025-05-17 00:48:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:20.709596 | orchestrator | 2025-05-17 00:48:20 | INFO  | Task a3d536f9-0635-40b0-81ef-22e399c0e3d0 is in state STARTED 2025-05-17 00:48:20.711757 | orchestrator | 2025-05-17 00:48:20 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:20.712692 | orchestrator | 2025-05-17 00:48:20 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:20.713231 | orchestrator | 2025-05-17 00:48:20 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:20.713274 | orchestrator | 2025-05-17 00:48:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:23.769278 | orchestrator | 2025-05-17 00:48:23 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:23.769528 | orchestrator | 2025-05-17 00:48:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:23.771587 | orchestrator | 2025-05-17 00:48:23 | INFO  | Task a3d536f9-0635-40b0-81ef-22e399c0e3d0 is in state STARTED 2025-05-17 00:48:23.773632 | orchestrator | 2025-05-17 00:48:23 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:23.776117 | orchestrator | 2025-05-17 00:48:23 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:23.776532 | orchestrator | 2025-05-17 00:48:23 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:23.776581 | orchestrator | 2025-05-17 00:48:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:26.844958 | orchestrator | 2025-05-17 00:48:26 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:26.845151 | orchestrator | 2025-05-17 00:48:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:26.845397 | orchestrator | 2025-05-17 00:48:26 | INFO  | Task a3d536f9-0635-40b0-81ef-22e399c0e3d0 is in state SUCCESS 2025-05-17 00:48:26.845918 | orchestrator | 2025-05-17 00:48:26 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:26.848210 | orchestrator | 2025-05-17 00:48:26 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:26.849283 | orchestrator | 2025-05-17 00:48:26 | INFO  | Task 
2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:26.849482 | orchestrator | 2025-05-17 00:48:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:29.907077 | orchestrator | 2025-05-17 00:48:29 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:29.907386 | orchestrator | 2025-05-17 00:48:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:29.912263 | orchestrator | 2025-05-17 00:48:29 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:48:29.914628 | orchestrator | 2025-05-17 00:48:29 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:29.916065 | orchestrator | 2025-05-17 00:48:29 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:29.918279 | orchestrator | 2025-05-17 00:48:29 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:29.918347 | orchestrator | 2025-05-17 00:48:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:32.959654 | orchestrator | 2025-05-17 00:48:32 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:32.959772 | orchestrator | 2025-05-17 00:48:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:32.959788 | orchestrator | 2025-05-17 00:48:32 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:48:32.960191 | orchestrator | 2025-05-17 00:48:32 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:32.960811 | orchestrator | 2025-05-17 00:48:32 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:32.963075 | orchestrator | 2025-05-17 00:48:32 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:32.963159 | orchestrator | 2025-05-17 00:48:32 | INFO  | Wait 1 
second(s) until the next check 2025-05-17 00:48:35.995567 | orchestrator | 2025-05-17 00:48:35 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:35.995715 | orchestrator | 2025-05-17 00:48:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:35.999480 | orchestrator | 2025-05-17 00:48:35 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:48:36.000619 | orchestrator | 2025-05-17 00:48:35 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:36.001758 | orchestrator | 2025-05-17 00:48:36 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:36.002710 | orchestrator | 2025-05-17 00:48:36 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:36.002736 | orchestrator | 2025-05-17 00:48:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:39.037761 | orchestrator | 2025-05-17 00:48:39 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:48:39.037910 | orchestrator | 2025-05-17 00:48:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:48:39.038482 | orchestrator | 2025-05-17 00:48:39 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:48:39.039211 | orchestrator | 2025-05-17 00:48:39 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:48:39.039458 | orchestrator | 2025-05-17 00:48:39 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:48:39.040104 | orchestrator | 2025-05-17 00:48:39 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state STARTED 2025-05-17 00:48:39.040134 | orchestrator | 2025-05-17 00:48:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:48:42.064555 | orchestrator | 2025-05-17 00:48:42 | INFO  | Task 
e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED
2025-05-17 00:48:42.064687 | orchestrator | 2025-05-17 00:48:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:48:42.065330 | orchestrator | 2025-05-17 00:48:42 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED
2025-05-17 00:48:42.065821 | orchestrator | 2025-05-17 00:48:42 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED
2025-05-17 00:48:42.066455 | orchestrator | 2025-05-17 00:48:42 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:48:42.066953 | orchestrator | 2025-05-17 00:48:42 | INFO  | Task 2b508138-22a8-4156-8f1b-db2c0a509f76 is in state SUCCESS
2025-05-17 00:48:42.067099 | orchestrator |
2025-05-17 00:48:42.067117 | orchestrator |
2025-05-17 00:48:42.067125 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 00:48:42.067131 | orchestrator |
2025-05-17 00:48:42.067135 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 00:48:42.067140 | orchestrator | Saturday 17 May 2025 00:48:11 +0000 (0:00:00.446) 0:00:00.446 **********
2025-05-17 00:48:42.067144 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:48:42.067150 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:48:42.067154 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:48:42.067158 | orchestrator |
2025-05-17 00:48:42.067162 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 00:48:42.067167 | orchestrator | Saturday 17 May 2025 00:48:12 +0000 (0:00:00.627) 0:00:01.073 **********
2025-05-17 00:48:42.067171 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-05-17 00:48:42.067175 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-05-17 00:48:42.067179 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-05-17 00:48:42.067183 | orchestrator |
2025-05-17 00:48:42.067187 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-05-17 00:48:42.067190 | orchestrator |
2025-05-17 00:48:42.067194 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-05-17 00:48:42.067198 | orchestrator | Saturday 17 May 2025 00:48:12 +0000 (0:00:00.323) 0:00:01.397 **********
2025-05-17 00:48:42.067202 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:48:42.067207 | orchestrator |
2025-05-17 00:48:42.067238 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-05-17 00:48:42.067245 | orchestrator | Saturday 17 May 2025 00:48:13 +0000 (0:00:00.772) 0:00:02.170 **********
2025-05-17 00:48:42.067251 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-17 00:48:42.067259 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-17 00:48:42.067265 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-17 00:48:42.067271 | orchestrator |
2025-05-17 00:48:42.067289 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-05-17 00:48:42.067295 | orchestrator | Saturday 17 May 2025 00:48:14 +0000 (0:00:00.900) 0:00:03.071 **********
2025-05-17 00:48:42.067301 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-17 00:48:42.067307 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-17 00:48:42.067313 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-17 00:48:42.067319 | orchestrator |
2025-05-17 00:48:42.067326 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-05-17 00:48:42.067334 | orchestrator | Saturday 17 May 2025 00:48:17 +0000 (0:00:03.214) 0:00:06.285 **********
2025-05-17 00:48:42.067341 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:48:42.067349 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:48:42.067355 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:48:42.067361 | orchestrator |
2025-05-17 00:48:42.067367 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-05-17 00:48:42.067374 | orchestrator | Saturday 17 May 2025 00:48:20 +0000 (0:00:03.673) 0:00:09.958 **********
2025-05-17 00:48:42.067407 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:48:42.067413 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:48:42.067417 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:48:42.067420 | orchestrator |
2025-05-17 00:48:42.067424 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:48:42.067428 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:48:42.067434 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:48:42.067504 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:48:42.067508 | orchestrator |
2025-05-17 00:48:42.067512 | orchestrator |
2025-05-17 00:48:42.067516 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 00:48:42.067520 | orchestrator | Saturday 17 May 2025 00:48:25 +0000 (0:00:05.033) 0:00:14.992 **********
2025-05-17 00:48:42.067527 | orchestrator | ===============================================================================
2025-05-17 00:48:42.067533 | orchestrator | memcached : Restart memcached container --------------------------------- 5.03s
2025-05-17 00:48:42.067539 | orchestrator | memcached : Check memcached container ----------------------------------- 3.67s
2025-05-17 00:48:42.067546 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.21s
2025-05-17 00:48:42.067553 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.90s
2025-05-17 00:48:42.067559 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.77s
2025-05-17 00:48:42.067566 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.63s
2025-05-17 00:48:42.067573 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s
2025-05-17 00:48:42.067582 | orchestrator |
2025-05-17 00:48:42.068241 | orchestrator |
2025-05-17 00:48:42.068269 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 00:48:42.068277 | orchestrator |
2025-05-17 00:48:42.068283 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 00:48:42.068290 | orchestrator | Saturday 17 May 2025 00:48:10 +0000 (0:00:00.341) 0:00:00.341 **********
2025-05-17 00:48:42.068304 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:48:42.068311 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:48:42.068318 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:48:42.068325 | orchestrator |
2025-05-17 00:48:42.068331 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 00:48:42.068338 | orchestrator | Saturday 17 May 2025 00:48:11 +0000 (0:00:00.438) 0:00:00.780 **********
2025-05-17 00:48:42.068344 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-05-17 00:48:42.068350 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-05-17 00:48:42.068357 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-05-17 00:48:42.068362 | orchestrator |
2025-05-17 00:48:42.068366 |
orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-17 00:48:42.068369 | orchestrator | 2025-05-17 00:48:42.068373 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-17 00:48:42.068377 | orchestrator | Saturday 17 May 2025 00:48:11 +0000 (0:00:00.391) 0:00:01.172 ********** 2025-05-17 00:48:42.068380 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:48:42.068385 | orchestrator | 2025-05-17 00:48:42.068388 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-17 00:48:42.068392 | orchestrator | Saturday 17 May 2025 00:48:12 +0000 (0:00:00.817) 0:00:01.989 ********** 2025-05-17 00:48:42.068398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': 
'30'}}}) 2025-05-17 00:48:42.068415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068444 | orchestrator | 2025-05-17 00:48:42.068448 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-17 00:48:42.068454 | orchestrator | Saturday 17 May 2025 00:48:14 +0000 (0:00:01.574) 0:00:03.564 ********** 2025-05-17 00:48:42.068461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068515 | orchestrator | 2025-05-17 00:48:42.068522 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-17 00:48:42.068528 | orchestrator | Saturday 17 May 2025 00:48:18 +0000 (0:00:04.063) 0:00:07.628 ********** 2025-05-17 00:48:42.068534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-05-17 00:48:42.068571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068580 | orchestrator | 2025-05-17 00:48:42.068584 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-17 00:48:42.068587 | orchestrator | Saturday 17 May 2025 00:48:22 +0000 (0:00:04.762) 0:00:12.390 ********** 2025-05-17 00:48:42.068591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-17 00:48:42.068628 | orchestrator | 2025-05-17 00:48:42.068635 | orchestrator | TASK 
[redis : Flush handlers] **************************************************
2025-05-17 00:48:42.068641 | orchestrator | Saturday 17 May 2025 00:48:25 +0000 (0:00:02.914) 0:00:15.305 **********
2025-05-17 00:48:42.068648 | orchestrator |
2025-05-17 00:48:42.068654 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-17 00:48:42.068660 | orchestrator | Saturday 17 May 2025 00:48:25 +0000 (0:00:00.095) 0:00:15.400 **********
2025-05-17 00:48:42.068668 | orchestrator |
2025-05-17 00:48:42.068674 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-17 00:48:42.068682 | orchestrator | Saturday 17 May 2025 00:48:26 +0000 (0:00:00.071) 0:00:15.471 **********
2025-05-17 00:48:42.068690 | orchestrator |
2025-05-17 00:48:42.068696 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-17 00:48:42.068703 | orchestrator | Saturday 17 May 2025 00:48:26 +0000 (0:00:00.074) 0:00:15.546 **********
2025-05-17 00:48:42.068709 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:48:42.068716 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:48:42.068723 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:48:42.068729 | orchestrator |
2025-05-17 00:48:42.068735 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-17 00:48:42.068742 | orchestrator | Saturday 17 May 2025 00:48:31 +0000 (0:00:05.048) 0:00:20.595 **********
2025-05-17 00:48:42.068748 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:48:42.068754 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:48:42.068760 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:48:42.068766 | orchestrator |
2025-05-17 00:48:42.068770 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:48:42.068774 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:48:42.068781 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:48:42.068785 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:48:42.068792 | orchestrator |
2025-05-17 00:48:42.068796 | orchestrator |
2025-05-17 00:48:42.068799 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 00:48:42.068803 | orchestrator | Saturday 17 May 2025 00:48:41 +0000 (0:00:10.147) 0:00:30.743 **********
2025-05-17 00:48:42.068807 | orchestrator | ===============================================================================
2025-05-17 00:48:42.068811 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.15s
2025-05-17 00:48:42.068815 | orchestrator | redis : Restart redis container ----------------------------------------- 5.05s
2025-05-17 00:48:42.068819 | orchestrator | redis : Copying over redis config files --------------------------------- 4.76s
2025-05-17 00:48:42.068826 | orchestrator | redis : Copying over default config.json files -------------------------- 4.06s
2025-05-17 00:48:42.068832 | orchestrator | redis : Check redis containers ------------------------------------------ 2.91s
2025-05-17 00:48:42.068838 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.57s
2025-05-17 00:48:42.068845 | orchestrator | redis : include_tasks --------------------------------------------------- 0.82s
2025-05-17 00:48:42.068853 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
2025-05-17 00:48:42.068861 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s
2025-05-17 00:48:42.068870 | orchestrator | redis : Flush handlers
-------------------------------------------------- 0.24s
2025-05-17 00:48:42.068879 | orchestrator | 2025-05-17 00:48:42 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:48:45.095230 | orchestrator | 2025-05-17 00:48:45 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED
2025-05-17 00:48:45.096966 | orchestrator | 2025-05-17 00:48:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:48:45.098630 | orchestrator | 2025-05-17 00:48:45 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED
2025-05-17 00:48:45.099315 | orchestrator | 2025-05-17 00:48:45 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED
2025-05-17 00:48:45.100697 | orchestrator | 2025-05-17 00:48:45 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:48:45.100750 | orchestrator | 2025-05-17 00:48:45 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:49:21.628121 | orchestrator | 2025-05-17 00:49:21 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED
2025-05-17 00:49:21.629094 | orchestrator | 2025-05-17 00:49:21 | INFO  | Task
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:49:21.629505 | orchestrator | 2025-05-17 00:49:21 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:49:21.630497 | orchestrator | 2025-05-17 00:49:21 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:49:21.631324 | orchestrator | 2025-05-17 00:49:21 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:49:21.631436 | orchestrator | 2025-05-17 00:49:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:49:24.666255 | orchestrator | 2025-05-17 00:49:24 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:49:24.666418 | orchestrator | 2025-05-17 00:49:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:49:24.666439 | orchestrator | 2025-05-17 00:49:24 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:49:24.667795 | orchestrator | 2025-05-17 00:49:24 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:49:24.668408 | orchestrator | 2025-05-17 00:49:24 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:49:24.668445 | orchestrator | 2025-05-17 00:49:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:49:27.707283 | orchestrator | 2025-05-17 00:49:27 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:49:27.708396 | orchestrator | 2025-05-17 00:49:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:49:27.711023 | orchestrator | 2025-05-17 00:49:27 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:49:27.712863 | orchestrator | 2025-05-17 00:49:27 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state STARTED 2025-05-17 00:49:27.716984 | orchestrator | 2025-05-17 00:49:27 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:49:27.717045 | orchestrator | 2025-05-17 00:49:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:49:30.759278 | orchestrator | 2025-05-17 00:49:30 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:49:30.760836 | orchestrator | 2025-05-17 00:49:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:49:30.761373 | orchestrator | 2025-05-17 00:49:30 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:49:30.762205 | orchestrator | 2025-05-17 00:49:30 | INFO  | Task 8cd9a168-c663-4123-ac10-879c7b483145 is in state STARTED 2025-05-17 00:49:30.763240 | orchestrator | 2025-05-17 00:49:30 | INFO  | Task 81968eb5-59bb-4a31-9598-f4adf3daae51 is in state SUCCESS 2025-05-17 00:49:30.766298 | orchestrator | 2025-05-17 00:49:30.766375 | orchestrator | 2025-05-17 00:49:30.766396 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 00:49:30.766422 | orchestrator | 2025-05-17 00:49:30.766439 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 00:49:30.766454 | orchestrator | Saturday 17 May 2025 00:48:10 +0000 (0:00:00.705) 0:00:00.705 ********** 2025-05-17 00:49:30.766470 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:49:30.766487 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:49:30.766502 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:49:30.766519 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:49:30.766534 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:49:30.766549 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:49:30.766562 | orchestrator | 2025-05-17 00:49:30.766571 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 00:49:30.766603 | orchestrator | Saturday 17 May 2025 00:48:11 +0000 
(0:00:01.159) 0:00:01.865 ********** 2025-05-17 00:49:30.766612 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-17 00:49:30.766621 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-17 00:49:30.766630 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-17 00:49:30.766639 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-17 00:49:30.766648 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-17 00:49:30.766656 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-17 00:49:30.766665 | orchestrator | 2025-05-17 00:49:30.766674 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-17 00:49:30.766683 | orchestrator | 2025-05-17 00:49:30.766692 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-17 00:49:30.766701 | orchestrator | Saturday 17 May 2025 00:48:12 +0000 (0:00:00.834) 0:00:02.699 ********** 2025-05-17 00:49:30.766710 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:49:30.766721 | orchestrator | 2025-05-17 00:49:30.766730 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-17 00:49:30.766739 | orchestrator | Saturday 17 May 2025 00:48:13 +0000 (0:00:01.534) 0:00:04.234 ********** 2025-05-17 00:49:30.766748 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-17 00:49:30.766757 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-17 00:49:30.766765 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-17 
00:49:30.766774 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-17 00:49:30.766782 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-17 00:49:30.766791 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-17 00:49:30.766800 | orchestrator | 2025-05-17 00:49:30.766808 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-17 00:49:30.766817 | orchestrator | Saturday 17 May 2025 00:48:15 +0000 (0:00:02.110) 0:00:06.345 ********** 2025-05-17 00:49:30.766826 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-17 00:49:30.766835 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-17 00:49:30.766843 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-17 00:49:30.766852 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-17 00:49:30.766861 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-17 00:49:30.766871 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-17 00:49:30.766881 | orchestrator | 2025-05-17 00:49:30.766892 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-17 00:49:30.766903 | orchestrator | Saturday 17 May 2025 00:48:19 +0000 (0:00:03.250) 0:00:09.595 ********** 2025-05-17 00:49:30.766913 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-17 00:49:30.766923 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:49:30.766934 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-17 00:49:30.766945 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:49:30.766955 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-17 00:49:30.766966 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:49:30.766976 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-17 
00:49:30.766987 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:49:30.767043 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-17 00:49:30.767052 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:49:30.767062 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-17 00:49:30.767079 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:49:30.767090 | orchestrator | 2025-05-17 00:49:30.767100 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-17 00:49:30.767110 | orchestrator | Saturday 17 May 2025 00:48:22 +0000 (0:00:03.177) 0:00:12.773 ********** 2025-05-17 00:49:30.767120 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:49:30.767131 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:49:30.767141 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:49:30.767151 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:49:30.767162 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:49:30.767177 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:49:30.767193 | orchestrator | 2025-05-17 00:49:30.767219 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-17 00:49:30.767234 | orchestrator | Saturday 17 May 2025 00:48:23 +0000 (0:00:01.561) 0:00:14.334 ********** 2025-05-17 00:49:30.767283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767423 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767492 | orchestrator | 2025-05-17 00:49:30.767502 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-17 00:49:30.767511 | orchestrator | Saturday 17 May 2025 00:48:27 +0000 (0:00:03.248) 0:00:17.582 ********** 2025-05-17 00:49:30.767520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767664 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767715 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.767734 | orchestrator | 2025-05-17 00:49:30.767743 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-17 
00:49:30.767753 | orchestrator | Saturday 17 May 2025 00:48:31 +0000 (0:00:04.606) 0:00:22.189 ********** 2025-05-17 00:49:30.767762 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:49:30.767771 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:49:30.767780 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:49:30.767789 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:49:30.767797 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:49:30.767806 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:49:30.767815 | orchestrator | 2025-05-17 00:49:30.767823 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-17 00:49:30.767832 | orchestrator | Saturday 17 May 2025 00:48:34 +0000 (0:00:03.250) 0:00:25.439 ********** 2025-05-17 00:49:30.767841 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:49:30.767849 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:49:30.767858 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:49:30.767866 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:49:30.767875 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:49:30.767884 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:49:30.767898 | orchestrator | 2025-05-17 00:49:30.767907 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-17 00:49:30.767915 | orchestrator | Saturday 17 May 2025 00:48:37 +0000 (0:00:02.214) 0:00:27.654 ********** 2025-05-17 00:49:30.767924 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:49:30.767932 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:49:30.767941 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:49:30.767950 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:49:30.767958 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:49:30.767967 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:49:30.767975 | orchestrator | 
2025-05-17 00:49:30.767984 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-17 00:49:30.768020 | orchestrator | Saturday 17 May 2025 00:48:38 +0000 (0:00:00.986) 0:00:28.640 ********** 2025-05-17 00:49:30.768030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-17 00:49:30.768180 | orchestrator | 2025-05-17 00:49:30.768189 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-17 00:49:30.768198 | orchestrator | Saturday 17 May 2025 00:48:40 +0000 (0:00:02.666) 0:00:31.307 ********** 2025-05-17 00:49:30.768207 | orchestrator | 2025-05-17 00:49:30.768215 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-17 00:49:30.768224 | orchestrator | Saturday 17 May 2025 00:48:40 +0000 (0:00:00.088) 0:00:31.395 ********** 2025-05-17 00:49:30.768232 | orchestrator | 2025-05-17 00:49:30.768241 | 
orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-17 00:49:30.768250 | orchestrator | Saturday 17 May 2025 00:48:41 +0000 (0:00:00.205) 0:00:31.601 ********** 2025-05-17 00:49:30.768258 | orchestrator | 2025-05-17 00:49:30.768267 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-17 00:49:30.768275 | orchestrator | Saturday 17 May 2025 00:48:41 +0000 (0:00:00.091) 0:00:31.692 ********** 2025-05-17 00:49:30.768283 | orchestrator | 2025-05-17 00:49:30.768292 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-17 00:49:30.768301 | orchestrator | Saturday 17 May 2025 00:48:41 +0000 (0:00:00.175) 0:00:31.867 ********** 2025-05-17 00:49:30.768310 | orchestrator | 2025-05-17 00:49:30.768318 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-17 00:49:30.768326 | orchestrator | Saturday 17 May 2025 00:48:41 +0000 (0:00:00.149) 0:00:32.016 ********** 2025-05-17 00:49:30.768340 | orchestrator | 2025-05-17 00:49:30.768362 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-17 00:49:30.768380 | orchestrator | Saturday 17 May 2025 00:48:41 +0000 (0:00:00.343) 0:00:32.360 ********** 2025-05-17 00:49:30.768395 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:49:30.768409 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:49:30.768424 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:49:30.768438 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:49:30.768454 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:49:30.768468 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:49:30.768481 | orchestrator | 2025-05-17 00:49:30.768503 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-17 00:49:30.768512 | orchestrator | 
Saturday 17 May 2025 00:48:52 +0000 (0:00:10.289) 0:00:42.649 ********** 2025-05-17 00:49:30.768528 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:49:30.768545 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:49:30.768554 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:49:30.768563 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:49:30.768571 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:49:30.768580 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:49:30.768588 | orchestrator | 2025-05-17 00:49:30.768597 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-17 00:49:30.768606 | orchestrator | Saturday 17 May 2025 00:48:54 +0000 (0:00:02.134) 0:00:44.783 ********** 2025-05-17 00:49:30.768614 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:49:30.768623 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:49:30.768632 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:49:30.768640 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:49:30.768649 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:49:30.768658 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:49:30.768673 | orchestrator | 2025-05-17 00:49:30.768696 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-17 00:49:30.768712 | orchestrator | Saturday 17 May 2025 00:49:04 +0000 (0:00:09.888) 0:00:54.672 ********** 2025-05-17 00:49:30.768727 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-17 00:49:30.768742 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-17 00:49:30.768757 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-17 00:49:30.768773 | orchestrator | changed: [testbed-node-4] => (item={'col': 
'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-17 00:49:30.768787 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-17 00:49:30.768802 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-17 00:49:30.768812 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-17 00:49:30.768820 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-17 00:49:30.768829 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-17 00:49:30.768838 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-17 00:49:30.768846 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-17 00:49:30.768855 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-17 00:49:30.768863 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-17 00:49:30.768872 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-17 00:49:30.768881 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-17 00:49:30.768889 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-17 00:49:30.768898 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 
'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-17 00:49:30.768907 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-17 00:49:30.768915 | orchestrator | 2025-05-17 00:49:30.768924 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-17 00:49:30.768933 | orchestrator | Saturday 17 May 2025 00:49:12 +0000 (0:00:08.035) 0:01:02.707 ********** 2025-05-17 00:49:30.768950 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-17 00:49:30.768959 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:49:30.768968 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-17 00:49:30.768977 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:49:30.768986 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-17 00:49:30.769068 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:49:30.769087 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-17 00:49:30.769102 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-17 00:49:30.769117 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-17 00:49:30.769126 | orchestrator | 2025-05-17 00:49:30.769134 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-17 00:49:30.769143 | orchestrator | Saturday 17 May 2025 00:49:14 +0000 (0:00:02.640) 0:01:05.348 ********** 2025-05-17 00:49:30.769152 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-17 00:49:30.769161 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:49:30.769169 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-17 00:49:30.769178 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:49:30.769187 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-17 00:49:30.769195 | 
orchestrator | skipping: [testbed-node-5]
2025-05-17 00:49:30.769210 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-05-17 00:49:30.769226 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-05-17 00:49:30.769235 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-05-17 00:49:30.769244 | orchestrator |
2025-05-17 00:49:30.769252 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-17 00:49:30.769261 | orchestrator | Saturday 17 May 2025 00:49:18 +0000 (0:00:03.769) 0:01:09.117 **********
2025-05-17 00:49:30.769269 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:49:30.769278 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:49:30.769287 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:49:30.769302 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:49:30.769323 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:49:30.769342 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:49:30.769357 | orchestrator |
2025-05-17 00:49:30.769380 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:49:30.769396 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:49:30.769441 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:49:30.769457 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:49:30.769469 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:49:30.769485 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:49:30.769508 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:49:30.769524 | orchestrator |
2025-05-17 00:49:30.769539 | orchestrator |
2025-05-17 00:49:30.769553 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 00:49:30.769567 | orchestrator | Saturday 17 May 2025 00:49:27 +0000 (0:00:08.712) 0:01:17.829 **********
2025-05-17 00:49:30.769581 | orchestrator | ===============================================================================
2025-05-17 00:49:30.769606 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.60s
2025-05-17 00:49:30.769621 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.29s
2025-05-17 00:49:30.769636 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.04s
2025-05-17 00:49:30.769649 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.61s
2025-05-17 00:49:30.769658 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.77s
2025-05-17 00:49:30.769669 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.25s
2025-05-17 00:49:30.769684 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.25s
2025-05-17 00:49:30.769699 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.25s
2025-05-17 00:49:30.769730 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.18s
2025-05-17 00:49:30.769758 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.67s
2025-05-17 00:49:30.769772 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.64s
2025-05-17 00:49:30.769785 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.21s
2025-05-17 00:49:30.769794 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.13s
2025-05-17 00:49:30.769802 | orchestrator | module-load : Load modules ---------------------------------------------- 2.11s
2025-05-17 00:49:30.769811 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.56s
2025-05-17 00:49:30.769819 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.53s
2025-05-17 00:49:30.769828 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.16s
2025-05-17 00:49:30.769837 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s
2025-05-17 00:49:30.769845 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.99s
2025-05-17 00:49:30.769854 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s
2025-05-17 00:49:30.770057 | orchestrator | 2025-05-17 00:49:30 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:49:30.770088 | orchestrator | 2025-05-17 00:49:30 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:49:33.809730 | orchestrator | 2025-05-17 00:49:33 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED
2025-05-17 00:49:33.811179 | orchestrator | 2025-05-17 00:49:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:49:33.814203 | orchestrator | 2025-05-17 00:49:33 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED
2025-05-17 00:49:33.815252 | orchestrator | 2025-05-17 00:49:33 | INFO  | Task 8cd9a168-c663-4123-ac10-879c7b483145 is in state STARTED
2025-05-17 00:49:33.817137 | orchestrator | 2025-05-17 00:49:33 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:49:33.817256 | orchestrator | 2025-05-17 00:49:33 | INFO  | Wait 1 second(s) until the next
check
2025-05-17 00:50:28.693923 | orchestrator | 2025-05-17 00:50:28 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED
2025-05-17 00:50:28.694667 | orchestrator | 2025-05-17 00:50:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:50:28.695418 | orchestrator | 2025-05-17 00:50:28 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED
2025-05-17 00:50:28.695810 | orchestrator | 2025-05-17 00:50:28 | INFO  | Task 
8cd9a168-c663-4123-ac10-879c7b483145 is in state STARTED 2025-05-17 00:50:28.696554 | orchestrator | 2025-05-17 00:50:28 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:50:28.696579 | orchestrator | 2025-05-17 00:50:28 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:50:31.732471 | orchestrator | 2025-05-17 00:50:31 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:50:31.732839 | orchestrator | 2025-05-17 00:50:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:50:31.733927 | orchestrator | 2025-05-17 00:50:31 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:50:31.734827 | orchestrator | 2025-05-17 00:50:31 | INFO  | Task 8cd9a168-c663-4123-ac10-879c7b483145 is in state STARTED 2025-05-17 00:50:31.739161 | orchestrator | 2025-05-17 00:50:31 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:50:31.739208 | orchestrator | 2025-05-17 00:50:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:50:34.807460 | orchestrator | 2025-05-17 00:50:34 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:50:34.811331 | orchestrator | 2025-05-17 00:50:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:50:34.812100 | orchestrator | 2025-05-17 00:50:34 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:50:34.812919 | orchestrator | 2025-05-17 00:50:34 | INFO  | Task 8cd9a168-c663-4123-ac10-879c7b483145 is in state STARTED 2025-05-17 00:50:34.813601 | orchestrator | 2025-05-17 00:50:34 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:50:34.813625 | orchestrator | 2025-05-17 00:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:50:37.872570 | orchestrator | 2025-05-17 00:50:37 | INFO  | Task 
e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:50:37.874197 | orchestrator | 2025-05-17 00:50:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:50:37.874633 | orchestrator | 2025-05-17 00:50:37 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state STARTED 2025-05-17 00:50:37.876414 | orchestrator | 2025-05-17 00:50:37 | INFO  | Task 8cd9a168-c663-4123-ac10-879c7b483145 is in state STARTED 2025-05-17 00:50:37.877526 | orchestrator | 2025-05-17 00:50:37 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:50:37.877563 | orchestrator | 2025-05-17 00:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:50:40.909313 | orchestrator | 2025-05-17 00:50:40 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:50:40.909852 | orchestrator | 2025-05-17 00:50:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:50:40.911214 | orchestrator | 2025-05-17 00:50:40.911250 | orchestrator | 2025-05-17 00:50:40.911269 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-17 00:50:40.911290 | orchestrator | 2025-05-17 00:50:40.911309 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-17 00:50:40.911330 | orchestrator | Saturday 17 May 2025 00:48:33 +0000 (0:00:00.128) 0:00:00.128 ********** 2025-05-17 00:50:40.911343 | orchestrator | ok: [localhost] => { 2025-05-17 00:50:40.911358 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-05-17 00:50:40.911370 | orchestrator | }
2025-05-17 00:50:40.911382 | orchestrator |
2025-05-17 00:50:40.911393 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-17 00:50:40.911404 | orchestrator | Saturday 17 May 2025 00:48:33 +0000 (0:00:00.060) 0:00:00.188 **********
2025-05-17 00:50:40.911416 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-17 00:50:40.911429 | orchestrator | ...ignoring
2025-05-17 00:50:40.911441 | orchestrator |
2025-05-17 00:50:40.911452 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-17 00:50:40.911462 | orchestrator | Saturday 17 May 2025 00:48:36 +0000 (0:00:02.947) 0:00:03.135 **********
2025-05-17 00:50:40.911473 | orchestrator | skipping: [localhost]
2025-05-17 00:50:40.911485 | orchestrator |
2025-05-17 00:50:40.911496 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-17 00:50:40.911506 | orchestrator | Saturday 17 May 2025 00:48:36 +0000 (0:00:00.060) 0:00:03.195 **********
2025-05-17 00:50:40.911517 | orchestrator | ok: [localhost]
2025-05-17 00:50:40.911528 | orchestrator |
2025-05-17 00:50:40.911557 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 00:50:40.911568 | orchestrator |
2025-05-17 00:50:40.911579 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 00:50:40.911590 | orchestrator | Saturday 17 May 2025 00:48:36 +0000 (0:00:00.215) 0:00:03.411 **********
2025-05-17 00:50:40.911601 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:50:40.911613 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:50:40.911624 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:50:40.911635 | orchestrator |
2025-05-17 00:50:40.911646 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 00:50:40.911657 | orchestrator | Saturday 17 May 2025 00:48:36 +0000 (0:00:00.343) 0:00:03.754 **********
2025-05-17 00:50:40.911668 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-17 00:50:40.911702 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-17 00:50:40.911714 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-17 00:50:40.911724 | orchestrator |
2025-05-17 00:50:40.911735 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-17 00:50:40.911745 | orchestrator |
2025-05-17 00:50:40.911756 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-17 00:50:40.911767 | orchestrator | Saturday 17 May 2025 00:48:37 +0000 (0:00:00.426) 0:00:04.180 **********
2025-05-17 00:50:40.911778 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:50:40.911788 | orchestrator |
2025-05-17 00:50:40.911799 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-17 00:50:40.911809 | orchestrator | Saturday 17 May 2025 00:48:38 +0000 (0:00:00.850) 0:00:05.031 **********
2025-05-17 00:50:40.911823 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:50:40.911835 | orchestrator |
2025-05-17 00:50:40.911848 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-17 00:50:40.911860 | orchestrator | Saturday 17 May 2025 00:48:39 +0000 (0:00:01.160) 0:00:06.191 **********
2025-05-17 00:50:40.911873 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:50:40.911886 | orchestrator |
2025-05-17 00:50:40.911899 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-17 00:50:40.911912 | orchestrator | Saturday 17 May 2025 00:48:39 +0000 (0:00:00.330) 0:00:06.521 **********
2025-05-17 00:50:40.911924 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:50:40.911936 | orchestrator |
2025-05-17 00:50:40.911949 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-17 00:50:40.911962 | orchestrator | Saturday 17 May 2025 00:48:40 +0000 (0:00:00.607) 0:00:07.129 **********
2025-05-17 00:50:40.911975 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:50:40.911986 | orchestrator |
2025-05-17 00:50:40.911997 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-17 00:50:40.912039 | orchestrator | Saturday 17 May 2025 00:48:40 +0000 (0:00:00.322) 0:00:07.451 **********
2025-05-17 00:50:40.912050 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:50:40.912061 | orchestrator |
2025-05-17 00:50:40.912072 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-17 00:50:40.912083 | orchestrator | Saturday 17 May 2025 00:48:40 +0000 (0:00:00.298) 0:00:07.750 **********
2025-05-17 00:50:40.912093 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:50:40.912104 | orchestrator |
2025-05-17 00:50:40.912115 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-17 00:50:40.912125 | orchestrator | Saturday 17 May 2025 00:48:41 +0000 (0:00:00.726) 0:00:08.476 **********
2025-05-17 00:50:40.912136 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:50:40.912147 | orchestrator |
2025-05-17 00:50:40.912158 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-17 00:50:40.912168 | orchestrator | Saturday 17 May 2025 00:48:42 +0000 (0:00:00.762) 0:00:09.239 **********
2025-05-17
00:50:40.912179 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:50:40.912190 | orchestrator |
2025-05-17 00:50:40.912201 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-05-17 00:50:40.912211 | orchestrator | Saturday 17 May 2025 00:48:42 +0000 (0:00:00.317) 0:00:09.557 **********
2025-05-17 00:50:40.912222 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:50:40.912233 | orchestrator |
2025-05-17 00:50:40.912255 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-05-17 00:50:40.912266 | orchestrator | Saturday 17 May 2025 00:48:43 +0000 (0:00:00.393) 0:00:09.950 **********
2025-05-17 00:50:40.912289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:50:40.912316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:50:40.912329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:50:40.912341 | orchestrator |
2025-05-17 00:50:40.912352 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-05-17 00:50:40.912363 | orchestrator | Saturday 17 May 2025 00:48:43 +0000 (0:00:00.826) 0:00:10.777 **********
2025-05-17 00:50:40.912385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:50:40.912411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:50:40.912424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:50:40.912435 | orchestrator |
2025-05-17 00:50:40.912446 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-05-17 00:50:40.912457 | orchestrator | Saturday 17 May 2025 00:48:45 +0000 (0:00:01.386) 0:00:12.163 **********
2025-05-17 00:50:40.912468 | orchestrator |
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-17 00:50:40.912479 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-17 00:50:40.912490 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-17 00:50:40.912501 | orchestrator |
2025-05-17 00:50:40.912511 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-05-17 00:50:40.912522 | orchestrator | Saturday 17 May 2025 00:48:46 +0000 (0:00:01.313) 0:00:13.477 **********
2025-05-17 00:50:40.912532 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-17 00:50:40.912543 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-17 00:50:40.912553 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-17 00:50:40.912564 | orchestrator |
2025-05-17 00:50:40.912574 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-05-17 00:50:40.912585 | orchestrator | Saturday 17 May 2025 00:48:48 +0000 (0:00:01.696) 0:00:15.174 **********
2025-05-17 00:50:40.912650 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-17 00:50:40.912662 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-17 00:50:40.912672 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-17 00:50:40.912683 | orchestrator |
2025-05-17 00:50:40.912700 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-05-17 00:50:40.912712 | orchestrator | Saturday 17 May 2025 00:48:49 +0000 (0:00:01.621) 0:00:16.796 **********
2025-05-17 00:50:40.912722 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-17 00:50:40.912733 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-17 00:50:40.912744 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-17 00:50:40.912755 | orchestrator |
2025-05-17 00:50:40.912766 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-05-17 00:50:40.912777 | orchestrator | Saturday 17 May 2025 00:48:51 +0000 (0:00:01.754) 0:00:18.550 **********
2025-05-17 00:50:40.912787 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-17 00:50:40.912798 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-17 00:50:40.912809 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-17 00:50:40.912819 | orchestrator |
2025-05-17 00:50:40.912830 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-05-17 00:50:40.912841 | orchestrator | Saturday 17 May 2025 00:48:53 +0000 (0:00:01.733) 0:00:20.284 **********
2025-05-17 00:50:40.912852 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-17 00:50:40.912863 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-17 00:50:40.912879 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-17 00:50:40.912890 | orchestrator |
2025-05-17 00:50:40.912902 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-17 00:50:40.912913 | orchestrator | Saturday 17
May 2025 00:48:55 +0000 (0:00:02.033) 0:00:22.317 **********
2025-05-17 00:50:40.912923 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:50:40.912935 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:50:40.912946 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:50:40.912956 | orchestrator |
2025-05-17 00:50:40.912967 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-05-17 00:50:40.912978 | orchestrator | Saturday 17 May 2025 00:48:56 +0000 (0:00:01.159) 0:00:23.477 **********
2025-05-17 00:50:40.912991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:50:40.913035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:50:40.913060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:50:40.913073 | orchestrator |
2025-05-17 00:50:40.913084 | orchestrator | TASK [rabbitmq : Creating
rabbitmq volume] *************************************
2025-05-17 00:50:40.913095 | orchestrator | Saturday 17 May 2025 00:48:58 +0000 (0:00:01.403) 0:00:24.880 **********
2025-05-17 00:50:40.913106 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:50:40.913117 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:50:40.913127 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:50:40.913138 | orchestrator |
2025-05-17 00:50:40.913149 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-05-17 00:50:40.913159 | orchestrator | Saturday 17 May 2025 00:48:58 +0000 (0:00:00.842) 0:00:25.723 **********
2025-05-17 00:50:40.913170 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:50:40.913181 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:50:40.913191 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:50:40.913202 | orchestrator |
2025-05-17 00:50:40.913213 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-17 00:50:40.913223 | orchestrator | Saturday 17 May 2025 00:49:04 +0000 (0:00:05.743) 0:00:31.466 **********
2025-05-17 00:50:40.913234 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:50:40.913244 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:50:40.913255 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:50:40.913266 | orchestrator |
2025-05-17 00:50:40.913313 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-17 00:50:40.913342 | orchestrator |
2025-05-17 00:50:40.913361 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-17 00:50:40.913379 | orchestrator | Saturday 17 May 2025 00:49:05 +0000 (0:00:00.532) 0:00:31.998 **********
2025-05-17 00:50:40.913397 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:50:40.913415 | orchestrator |
2025-05-17 00:50:40.913434 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-17 00:50:40.913467 | orchestrator | Saturday 17 May 2025 00:49:06 +0000 (0:00:00.901) 0:00:32.900 **********
2025-05-17 00:50:40.913484 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:50:40.913500 | orchestrator |
2025-05-17 00:50:40.913511 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-17 00:50:40.913522 | orchestrator | Saturday 17 May 2025 00:49:06 +0000 (0:00:00.217) 0:00:33.118 **********
2025-05-17 00:50:40.913532 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:50:40.913543 | orchestrator |
2025-05-17 00:50:40.913553 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-17 00:50:40.913564 | orchestrator | Saturday 17 May 2025 00:49:08 +0000 (0:00:02.206) 0:00:35.325 **********
2025-05-17 00:50:40.913575 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:50:40.913585 | orchestrator |
2025-05-17 00:50:40.913596 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-17 00:50:40.913606 | orchestrator |
2025-05-17 00:50:40.913617 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-17 00:50:40.913627 | orchestrator | Saturday 17 May 2025 00:50:02 +0000 (0:00:54.520) 0:01:29.845 **********
2025-05-17 00:50:40.913667 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:50:40.913679 | orchestrator |
2025-05-17 00:50:40.913689 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-17 00:50:40.913700 | orchestrator | Saturday 17 May 2025 00:50:03 +0000 (0:00:00.911) 0:01:30.757 **********
2025-05-17 00:50:40.913711 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:50:40.913722 | orchestrator |
2025-05-17 00:50:40.913732 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-17 00:50:40.913743 | orchestrator | Saturday 17 May 2025 00:50:04 +0000 (0:00:00.382) 0:01:31.139 **********
2025-05-17 00:50:40.913754 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:50:40.913764 | orchestrator |
2025-05-17 00:50:40.913775 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-17 00:50:40.913785 | orchestrator | Saturday 17 May 2025 00:50:06 +0000 (0:00:01.739) 0:01:32.878 **********
2025-05-17 00:50:40.913796 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:50:40.913807 | orchestrator |
2025-05-17 00:50:40.913817 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-17 00:50:40.913828 | orchestrator |
2025-05-17 00:50:40.913839 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-17 00:50:40.913850 | orchestrator | Saturday 17 May 2025 00:50:20 +0000 (0:00:14.365) 0:01:47.244 **********
2025-05-17 00:50:40.913861 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:50:40.913872 | orchestrator |
2025-05-17 00:50:40.913883 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-17 00:50:40.913894 | orchestrator | Saturday 17 May 2025 00:50:20 +0000 (0:00:00.584) 0:01:47.829 **********
2025-05-17 00:50:40.913905 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:50:40.913916 | orchestrator |
2025-05-17 00:50:40.914892 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-17 00:50:40.914942 | orchestrator | Saturday 17 May 2025 00:50:21 +0000 (0:00:00.197) 0:01:48.026 **********
2025-05-17 00:50:40.914954 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:50:40.914965 | orchestrator |
2025-05-17 00:50:40.914976 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-17 00:50:40.914986 | orchestrator | Saturday 17 May 2025 00:50:23 +0000 (0:00:01.890) 0:01:49.916 **********
2025-05-17 00:50:40.914997 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:50:40.915051 | orchestrator |
2025-05-17 00:50:40.915062 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-17 00:50:40.915073 | orchestrator |
2025-05-17 00:50:40.915084 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-17 00:50:40.915095 | orchestrator | Saturday 17 May 2025 00:50:36 +0000 (0:00:13.838) 0:02:03.754 **********
2025-05-17 00:50:40.915106 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:50:40.915130 | orchestrator |
2025-05-17 00:50:40.915140 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-17 00:50:40.915151 | orchestrator | Saturday 17 May 2025 00:50:37 +0000 (0:00:01.008) 0:02:04.763 **********
2025-05-17 00:50:40.915162 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-17 00:50:40.915173 | orchestrator | enable_outward_rabbitmq_True
2025-05-17 00:50:40.915184 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-17 00:50:40.915195 | orchestrator | outward_rabbitmq_restart
2025-05-17 00:50:40.915206 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:50:40.915217 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:50:40.915228 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:50:40.915239 | orchestrator |
2025-05-17 00:50:40.915250 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-17 00:50:40.915260 | orchestrator | skipping: no hosts matched
2025-05-17 00:50:40.915271 | orchestrator |
2025-05-17 00:50:40.915282 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-17
00:50:40.915292 | orchestrator | skipping: no hosts matched 2025-05-17 00:50:40.915303 | orchestrator | 2025-05-17 00:50:40.915313 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-17 00:50:40.915324 | orchestrator | skipping: no hosts matched 2025-05-17 00:50:40.915335 | orchestrator | 2025-05-17 00:50:40.915346 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:50:40.915358 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-17 00:50:40.915370 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-17 00:50:40.915381 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 00:50:40.915392 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 00:50:40.915402 | orchestrator | 2025-05-17 00:50:40.915413 | orchestrator | 2025-05-17 00:50:40.915424 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 00:50:40.915435 | orchestrator | Saturday 17 May 2025 00:50:40 +0000 (0:00:02.239) 0:02:07.002 ********** 2025-05-17 00:50:40.915446 | orchestrator | =============================================================================== 2025-05-17 00:50:40.915465 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.72s 2025-05-17 00:50:40.915476 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.84s 2025-05-17 00:50:40.915487 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.74s 2025-05-17 00:50:40.915497 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.95s 2025-05-17 00:50:40.915508 | orchestrator | 
rabbitmq : Get info on RabbitMQ container ------------------------------- 2.40s 2025-05-17 00:50:40.915519 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.24s 2025-05-17 00:50:40.915530 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.03s 2025-05-17 00:50:40.915540 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.75s 2025-05-17 00:50:40.915551 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.73s 2025-05-17 00:50:40.915562 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.70s 2025-05-17 00:50:40.915572 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.62s 2025-05-17 00:50:40.915583 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.40s 2025-05-17 00:50:40.915601 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.39s 2025-05-17 00:50:40.915612 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.31s 2025-05-17 00:50:40.915623 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.16s 2025-05-17 00:50:40.915633 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.16s 2025-05-17 00:50:40.915644 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.01s 2025-05-17 00:50:40.915655 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.85s 2025-05-17 00:50:40.915666 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.84s 2025-05-17 00:50:40.915676 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.83s 2025-05-17 00:50:40.915695 | orchestrator | 2025-05-17 
00:50:40 | INFO  | Task a82c7811-c655-4e36-a57a-7ce041ad832a is in state SUCCESS 2025-05-17 00:50:40.915707 | orchestrator | 2025-05-17 00:50:40 | INFO  | Task 8cd9a168-c663-4123-ac10-879c7b483145 is in state STARTED 2025-05-17 00:50:40.915718 | orchestrator | 2025-05-17 00:50:40 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:50:40.915729 | orchestrator | 2025-05-17 00:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:50:43.959323 | orchestrator | 2025-05-17 00:50:43 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:50:43.960229 | orchestrator | 2025-05-17 00:50:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:50:43.960258 | orchestrator | 2025-05-17 00:50:43 | INFO  | Task 8cd9a168-c663-4123-ac10-879c7b483145 is in state STARTED 2025-05-17 00:50:43.964282 | orchestrator | 2025-05-17 00:50:43 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:50:43.964312 | orchestrator | 2025-05-17 00:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:52:00.246622 | orchestrator | 2025-05-17 00:52:00 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:52:00.247257 | orchestrator | 2025-05-17 00:52:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:52:00.248300 | orchestrator | 2025-05-17 00:52:00 | INFO  | Task 8cd9a168-c663-4123-ac10-879c7b483145 is in state SUCCESS 2025-05-17 00:52:00.250715 | orchestrator | 2025-05-17 00:52:00.250764 | orchestrator | 2025-05-17 00:52:00.250778 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 00:52:00.250791 | orchestrator | 2025-05-17 00:52:00.250802 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 00:52:00.250813 | orchestrator | Saturday 17 May 2025 00:49:31 +0000 (0:00:00.211) 0:00:00.211 
********** 2025-05-17 00:52:00.250825 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:52:00.250837 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:52:00.250847 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:52:00.250879 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:52:00.250889 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:52:00.250900 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:52:00.250911 | orchestrator | 2025-05-17 00:52:00.250922 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 00:52:00.250932 | orchestrator | Saturday 17 May 2025 00:49:32 +0000 (0:00:00.810) 0:00:01.021 ********** 2025-05-17 00:52:00.250943 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-17 00:52:00.250954 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-17 00:52:00.250965 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-17 00:52:00.250976 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-17 00:52:00.250986 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-17 00:52:00.250996 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-17 00:52:00.251033 | orchestrator | 2025-05-17 00:52:00.251044 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-17 00:52:00.251054 | orchestrator | 2025-05-17 00:52:00.251064 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-17 00:52:00.251075 | orchestrator | Saturday 17 May 2025 00:49:33 +0000 (0:00:01.438) 0:00:02.460 ********** 2025-05-17 00:52:00.252438 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:52:00.252486 | orchestrator | 2025-05-17 00:52:00.252499 | orchestrator | TASK [ovn-controller : 
Ensuring config directories exist] ********************** 2025-05-17 00:52:00.252512 | orchestrator | Saturday 17 May 2025 00:49:35 +0000 (0:00:01.333) 0:00:03.793 ********** 2025-05-17 00:52:00.252524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252652 | orchestrator | 2025-05-17 00:52:00.252662 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-17 00:52:00.252672 | orchestrator | Saturday 17 May 2025 00:49:36 +0000 (0:00:01.249) 0:00:05.043 ********** 2025-05-17 00:52:00.252681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252743 | orchestrator | 2025-05-17 00:52:00.252753 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-17 00:52:00.252763 | orchestrator | Saturday 17 May 2025 00:49:39 +0000 (0:00:02.482) 0:00:07.525 ********** 2025-05-17 00:52:00.252774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252828 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252859 | orchestrator | 2025-05-17 00:52:00.252870 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-17 00:52:00.252880 | orchestrator | Saturday 17 May 2025 00:49:40 +0000 (0:00:01.286) 0:00:08.812 ********** 2025-05-17 00:52:00.252891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.252972 | orchestrator | 2025-05-17 00:52:00.252982 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-17 00:52:00.252992 | orchestrator | Saturday 17 May 2025 00:49:42 +0000 (0:00:02.542) 0:00:11.355 ********** 2025-05-17 00:52:00.253025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.253037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.253047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.253057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.253067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.253086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 00:52:00.253097 | orchestrator | 2025-05-17 00:52:00.253108 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-17 00:52:00.253119 | orchestrator | Saturday 17 May 2025 00:49:44 +0000 (0:00:01.532) 0:00:12.887 ********** 
2025-05-17 00:52:00.253129 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:52:00.253141 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:52:00.253151 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:52:00.253161 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:52:00.253171 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:52:00.253181 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:52:00.253191 | orchestrator | 2025-05-17 00:52:00.253201 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-17 00:52:00.253215 | orchestrator | Saturday 17 May 2025 00:49:48 +0000 (0:00:03.732) 0:00:16.619 ********** 2025-05-17 00:52:00.253226 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-17 00:52:00.253236 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-17 00:52:00.253246 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-17 00:52:00.253263 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-17 00:52:00.253273 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-17 00:52:00.253283 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-17 00:52:00.253293 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-17 00:52:00.253303 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-17 00:52:00.253314 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-17 00:52:00.253324 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 
2025-05-17 00:52:00.253334 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-17 00:52:00.253344 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-17 00:52:00.253354 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-17 00:52:00.253366 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-17 00:52:00.253376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-17 00:52:00.253386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-17 00:52:00.253397 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-17 00:52:00.253407 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-17 00:52:00.253425 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-17 00:52:00.253436 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-17 00:52:00.253447 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-17 00:52:00.253457 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-17 00:52:00.253467 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-17 00:52:00.253477 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-17 00:52:00.253487 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-17 00:52:00.253498 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-17 00:52:00.253508 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-17 00:52:00.253518 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-17 00:52:00.253528 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-17 00:52:00.253539 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-17 00:52:00.253549 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-17 00:52:00.253560 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-17 00:52:00.253570 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-17 00:52:00.253581 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-17 00:52:00.253591 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-17 00:52:00.253601 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-17 00:52:00.253612 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-17 00:52:00.253627 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-17 00:52:00.253638 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-17 00:52:00.253649 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-17 00:52:00.253665 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-17 00:52:00.253675 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-17 00:52:00.253686 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-17 00:52:00.253697 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-17 00:52:00.253707 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-17 00:52:00.253718 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-17 00:52:00.253729 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-17 00:52:00.253739 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-17 00:52:00.253756 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-17 00:52:00.253767 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-17 00:52:00.253777 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-17 00:52:00.253788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-17 00:52:00.253798 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-17 00:52:00.253808 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-17 00:52:00.253818 | orchestrator | 2025-05-17 00:52:00.253829 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-17 00:52:00.253840 | orchestrator | Saturday 17 May 2025 00:50:06 +0000 (0:00:18.684) 0:00:35.304 ********** 2025-05-17 00:52:00.253851 | orchestrator | 2025-05-17 00:52:00.253861 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-17 00:52:00.253871 | orchestrator | Saturday 17 May 2025 00:50:06 +0000 (0:00:00.061) 0:00:35.365 ********** 2025-05-17 00:52:00.253882 | orchestrator | 2025-05-17 00:52:00.253893 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-17 00:52:00.253903 | orchestrator | Saturday 17 May 2025 00:50:07 +0000 (0:00:00.217) 0:00:35.582 ********** 2025-05-17 00:52:00.253913 | orchestrator | 2025-05-17 00:52:00.253923 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-17 00:52:00.253934 | orchestrator | Saturday 17 May 2025 00:50:07 +0000 (0:00:00.055) 0:00:35.638 ********** 2025-05-17 00:52:00.253944 | orchestrator | 2025-05-17 00:52:00.253954 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2025-05-17 00:52:00.253964 | orchestrator | Saturday 17 May 2025 00:50:07 +0000 (0:00:00.052) 0:00:35.691 ********** 2025-05-17 00:52:00.253974 | orchestrator | 2025-05-17 00:52:00.253984 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-17 00:52:00.253994 | orchestrator | Saturday 17 May 2025 00:50:07 +0000 (0:00:00.054) 0:00:35.746 ********** 2025-05-17 00:52:00.254077 | orchestrator | 2025-05-17 00:52:00.254092 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-17 00:52:00.254103 | orchestrator | Saturday 17 May 2025 00:50:07 +0000 (0:00:00.052) 0:00:35.798 ********** 2025-05-17 00:52:00.254115 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:52:00.254136 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:52:00.254147 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:52:00.254158 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:52:00.254168 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:52:00.254178 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:52:00.254187 | orchestrator | 2025-05-17 00:52:00.254197 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-17 00:52:00.254208 | orchestrator | Saturday 17 May 2025 00:50:09 +0000 (0:00:02.479) 0:00:38.278 ********** 2025-05-17 00:52:00.254218 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:52:00.254229 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:52:00.254239 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:52:00.254249 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:52:00.254260 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:52:00.254271 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:52:00.254282 | orchestrator | 2025-05-17 00:52:00.254292 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 
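The "Configure OVN in OVSDB" task in the play above writes a fixed set of per-chassis `external_ids`. As a minimal sketch of how the values seen in this log fit together (the `build_external_ids` helper is hypothetical, not kolla-ansible code; the real play drives this through `ovs-vsctl` on each node):

```python
# Sketch: assemble the OVN external_ids values seen in the log above.
# build_external_ids is a hypothetical illustration, not part of kolla-ansible.

def build_external_ids(node_ip, ovn_db_ips, sb_port=6642):
    """Return the external_ids key/value pairs the play sets per chassis."""
    return {
        "ovn-encap-ip": node_ip,              # this node's tunnel endpoint
        "ovn-encap-type": "geneve",           # overlay encapsulation
        "ovn-remote": ",".join(f"tcp:{ip}:{sb_port}" for ip in ovn_db_ips),
        "ovn-remote-probe-interval": "60000", # milliseconds
        "ovn-openflow-probe-interval": "60",  # seconds
        "ovn-monitor-all": False,
    }

# testbed-node-3's values, with the three control nodes hosting the SB DB:
ids = build_external_ids(
    "192.168.16.13",
    ["192.168.16.10", "192.168.16.11", "192.168.16.12"],
)
print(ids["ovn-remote"])
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

Note how the same `ovn-remote` string appears for every node in the log, while `ovn-encap-ip` varies per host; gateway-capable nodes additionally get `ovn-bridge-mappings` and `ovn-cms-options` set, and the others have those keys removed (`state: absent`).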
2025-05-17 00:52:00.254302 | orchestrator | 2025-05-17 00:52:00.254313 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-17 00:52:00.254359 | orchestrator | Saturday 17 May 2025 00:50:35 +0000 (0:00:26.019) 0:01:04.298 ********** 2025-05-17 00:52:00.254378 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:52:00.254388 | orchestrator | 2025-05-17 00:52:00.254398 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-17 00:52:00.254408 | orchestrator | Saturday 17 May 2025 00:50:36 +0000 (0:00:00.878) 0:01:05.176 ********** 2025-05-17 00:52:00.254419 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:52:00.254429 | orchestrator | 2025-05-17 00:52:00.254451 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-17 00:52:00.254463 | orchestrator | Saturday 17 May 2025 00:50:37 +0000 (0:00:01.278) 0:01:06.454 ********** 2025-05-17 00:52:00.254474 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:52:00.254485 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:52:00.254497 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:52:00.254508 | orchestrator | 2025-05-17 00:52:00.254519 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-17 00:52:00.254529 | orchestrator | Saturday 17 May 2025 00:50:38 +0000 (0:00:00.889) 0:01:07.344 ********** 2025-05-17 00:52:00.254540 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:52:00.254549 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:52:00.254560 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:52:00.254569 | orchestrator | 2025-05-17 00:52:00.254579 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 
2025-05-17 00:52:00.254590 | orchestrator | Saturday 17 May 2025 00:50:39 +0000 (0:00:00.262) 0:01:07.606 ********** 2025-05-17 00:52:00.254601 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:52:00.254611 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:52:00.254621 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:52:00.254631 | orchestrator | 2025-05-17 00:52:00.254641 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-17 00:52:00.254651 | orchestrator | Saturday 17 May 2025 00:50:39 +0000 (0:00:00.421) 0:01:08.028 ********** 2025-05-17 00:52:00.254661 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:52:00.254672 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:52:00.254682 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:52:00.254692 | orchestrator | 2025-05-17 00:52:00.254731 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-17 00:52:00.254742 | orchestrator | Saturday 17 May 2025 00:50:39 +0000 (0:00:00.401) 0:01:08.429 ********** 2025-05-17 00:52:00.254752 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:52:00.254762 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:52:00.254772 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:52:00.254782 | orchestrator | 2025-05-17 00:52:00.254792 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-17 00:52:00.254802 | orchestrator | Saturday 17 May 2025 00:50:40 +0000 (0:00:00.289) 0:01:08.718 ********** 2025-05-17 00:52:00.254813 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.254823 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.254834 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.254844 | orchestrator | 2025-05-17 00:52:00.254855 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-17 00:52:00.254865 | orchestrator | Saturday 17 
May 2025 00:50:40 +0000 (0:00:00.418) 0:01:09.137 ********** 2025-05-17 00:52:00.254876 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.254886 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.254896 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.254906 | orchestrator | 2025-05-17 00:52:00.254916 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-17 00:52:00.254926 | orchestrator | Saturday 17 May 2025 00:50:40 +0000 (0:00:00.312) 0:01:09.449 ********** 2025-05-17 00:52:00.254936 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.254946 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.254970 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.254980 | orchestrator | 2025-05-17 00:52:00.254991 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-17 00:52:00.255001 | orchestrator | Saturday 17 May 2025 00:50:41 +0000 (0:00:00.330) 0:01:09.780 ********** 2025-05-17 00:52:00.255035 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.255046 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.255056 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.255066 | orchestrator | 2025-05-17 00:52:00.255076 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-17 00:52:00.255085 | orchestrator | Saturday 17 May 2025 00:50:41 +0000 (0:00:00.238) 0:01:10.019 ********** 2025-05-17 00:52:00.255096 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.255106 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.255116 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.255126 | orchestrator | 2025-05-17 00:52:00.255136 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-17 00:52:00.255146 | orchestrator | Saturday 17 May 
2025 00:50:41 +0000 (0:00:00.329) 0:01:10.348 ********** 2025-05-17 00:52:00.255156 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.255165 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.255175 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.255186 | orchestrator | 2025-05-17 00:52:00.255197 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-17 00:52:00.255208 | orchestrator | Saturday 17 May 2025 00:50:42 +0000 (0:00:00.321) 0:01:10.670 ********** 2025-05-17 00:52:00.255218 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.255229 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.255239 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.255250 | orchestrator | 2025-05-17 00:52:00.255261 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-17 00:52:00.255271 | orchestrator | Saturday 17 May 2025 00:50:42 +0000 (0:00:00.310) 0:01:10.981 ********** 2025-05-17 00:52:00.255281 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.255292 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.255301 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.255311 | orchestrator | 2025-05-17 00:52:00.255321 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-17 00:52:00.255330 | orchestrator | Saturday 17 May 2025 00:50:42 +0000 (0:00:00.270) 0:01:11.251 ********** 2025-05-17 00:52:00.255339 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.255355 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.255365 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.255374 | orchestrator | 2025-05-17 00:52:00.255384 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-17 00:52:00.255393 | orchestrator | Saturday 17 May 
2025 00:50:43 +0000 (0:00:00.476) 0:01:11.728 ********** 2025-05-17 00:52:00.255403 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.255412 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.255422 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.255432 | orchestrator | 2025-05-17 00:52:00.255454 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-17 00:52:00.255464 | orchestrator | Saturday 17 May 2025 00:50:43 +0000 (0:00:00.412) 0:01:12.141 ********** 2025-05-17 00:52:00.255474 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.255483 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.255493 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.255503 | orchestrator | 2025-05-17 00:52:00.255513 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-17 00:52:00.255522 | orchestrator | Saturday 17 May 2025 00:50:43 +0000 (0:00:00.292) 0:01:12.433 ********** 2025-05-17 00:52:00.255531 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:52:00.255541 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:52:00.255551 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:52:00.255570 | orchestrator | 2025-05-17 00:52:00.255581 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-17 00:52:00.255591 | orchestrator | Saturday 17 May 2025 00:50:44 +0000 (0:00:00.517) 0:01:12.950 ********** 2025-05-17 00:52:00.255601 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:52:00.255612 | orchestrator | 2025-05-17 00:52:00.255623 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-17 00:52:00.255633 | orchestrator | Saturday 17 May 2025 00:50:45 +0000 (0:00:00.908) 0:01:13.859 ********** 
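The `lookup_cluster.yml` tasks above decide between bootstrapping a fresh OVN DB cluster and joining an existing one by checking each host for an existing DB container volume. A rough sketch of that "divide hosts" step, under the assumption that it reduces to partitioning on a per-host boolean fact (`partition_hosts` is an illustrative helper, not kolla-ansible code):

```python
# Sketch of the "Divide hosts by their OVN NB/SB volume availability" logic
# seen above; partition_hosts is a hypothetical helper for illustration.

def partition_hosts(volume_facts):
    """Split hosts into those with an existing OVN DB volume and those without.

    volume_facts maps hostname -> bool (DB volume already present).
    If no host has a volume, the play takes the new-cluster path
    ("Set bootstrap args fact ... (new cluster)"); otherwise hosts
    without a volume would join as new members.
    """
    have = [h for h, present in sorted(volume_facts.items()) if present]
    missing = [h for h, present in sorted(volume_facts.items()) if not present]
    return have, missing

# In this run all three control nodes start empty, so the "new member"
# and "existing cluster" tasks are all skipped:
have, missing = partition_hosts({
    "testbed-node-0": False,
    "testbed-node-1": False,
    "testbed-node-2": False,
})
cluster_exists = bool(have)
print(cluster_exists)  # False -> new-cluster bootstrap path, as in this log
```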
2025-05-17 00:52:00.255644 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.255655 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.255665 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.255676 | orchestrator |
2025-05-17 00:52:00.255687 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-05-17 00:52:00.255697 | orchestrator | Saturday 17 May 2025 00:50:46 +0000 (0:00:00.708) 0:01:14.567 **********
2025-05-17 00:52:00.255708 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.255719 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.255730 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.255741 | orchestrator |
2025-05-17 00:52:00.255751 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-05-17 00:52:00.255761 | orchestrator | Saturday 17 May 2025 00:50:46 +0000 (0:00:00.905) 0:01:15.473 **********
2025-05-17 00:52:00.255773 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:52:00.255783 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.255794 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.255805 | orchestrator |
2025-05-17 00:52:00.255816 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-05-17 00:52:00.255827 | orchestrator | Saturday 17 May 2025 00:50:47 +0000 (0:00:00.681) 0:01:16.154 **********
2025-05-17 00:52:00.255837 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:52:00.255847 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.255856 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.255866 | orchestrator |
2025-05-17 00:52:00.255876 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-05-17 00:52:00.255886 | orchestrator | Saturday 17 May 2025 00:50:48 +0000 (0:00:00.443) 0:01:16.597 **********
2025-05-17 00:52:00.255896 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:52:00.255907 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.255917 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.255927 | orchestrator |
2025-05-17 00:52:00.255937 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-05-17 00:52:00.255948 | orchestrator | Saturday 17 May 2025 00:50:48 +0000 (0:00:00.320) 0:01:16.918 **********
2025-05-17 00:52:00.255958 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:52:00.255968 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.255978 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.255988 | orchestrator |
2025-05-17 00:52:00.255998 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-05-17 00:52:00.256070 | orchestrator | Saturday 17 May 2025 00:50:48 +0000 (0:00:00.487) 0:01:17.405 **********
2025-05-17 00:52:00.256082 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:52:00.256093 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.256103 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.256114 | orchestrator |
2025-05-17 00:52:00.256125 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-05-17 00:52:00.256136 | orchestrator | Saturday 17 May 2025 00:50:49 +0000 (0:00:00.467) 0:01:17.873 **********
2025-05-17 00:52:00.256146 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:52:00.256157 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.256168 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.256178 | orchestrator |
2025-05-17 00:52:00.256189 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-17 00:52:00.256209 | orchestrator | Saturday 17 May 2025 00:50:49 +0000 (0:00:00.502) 0:01:18.376 **********
2025-05-17 00:52:00.256221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256413 | orchestrator |
2025-05-17 00:52:00.256424 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-05-17 00:52:00.256433 | orchestrator | Saturday 17 May 2025 00:50:51 +0000 (0:00:01.514) 0:01:19.890 **********
2025-05-17 00:52:00.256444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256560 | orchestrator |
2025-05-17 00:52:00.256570 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-05-17 00:52:00.256579 | orchestrator | Saturday 17 May 2025 00:50:55 +0000 (0:00:04.049) 0:01:23.939 **********
2025-05-17 00:52:00.256588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.256700 | orchestrator |
2025-05-17 00:52:00.256710 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-17 00:52:00.256720 | orchestrator | Saturday 17 May 2025 00:50:57 +0000 (0:00:02.344) 0:01:26.284 **********
2025-05-17 00:52:00.256729 | orchestrator |
2025-05-17 00:52:00.256738 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-17 00:52:00.256747 | orchestrator | Saturday 17 May 2025 00:50:57 +0000 (0:00:00.055) 0:01:26.339 **********
2025-05-17 00:52:00.256757 | orchestrator |
2025-05-17 00:52:00.256766 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-17 00:52:00.256776 | orchestrator | Saturday 17 May 2025 00:50:57 +0000 (0:00:00.053) 0:01:26.393 **********
2025-05-17 00:52:00.256787 | orchestrator |
2025-05-17 00:52:00.256797 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-17 00:52:00.256806 | orchestrator | Saturday 17 May 2025 00:50:57 +0000 (0:00:00.053) 0:01:26.446 **********
2025-05-17 00:52:00.256816 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:52:00.256827 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:52:00.256837 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:52:00.256847 | orchestrator |
2025-05-17 00:52:00.256857 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-17 00:52:00.256867 | orchestrator | Saturday 17 May 2025 00:51:05 +0000 (0:00:07.939) 0:01:34.386 **********
2025-05-17 00:52:00.256877 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:52:00.256886 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:52:00.256895 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:52:00.256904 | orchestrator |
2025-05-17 00:52:00.256913 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-17 00:52:00.256923 | orchestrator | Saturday 17 May 2025 00:51:13 +0000 (0:00:07.811) 0:01:42.198 **********
2025-05-17 00:52:00.256932 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:52:00.256941 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:52:00.256951 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:52:00.256960 | orchestrator |
2025-05-17 00:52:00.256969 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-17 00:52:00.256979 | orchestrator | Saturday 17 May 2025 00:51:16 +0000 (0:00:02.740) 0:01:44.938 **********
2025-05-17 00:52:00.256993 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:52:00.257028 | orchestrator |
2025-05-17 00:52:00.257039 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-17 00:52:00.257049 | orchestrator | Saturday 17 May 2025 00:51:16 +0000 (0:00:00.113) 0:01:45.051 **********
2025-05-17 00:52:00.257058 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.257069 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.257079 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.257089 | orchestrator |
2025-05-17 00:52:00.257108 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-17 00:52:00.257118 | orchestrator | Saturday 17 May 2025 00:51:17 +0000 (0:00:00.886) 0:01:45.937 **********
2025-05-17 00:52:00.257127 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.257137 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.257146 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:52:00.257155 | orchestrator |
2025-05-17 00:52:00.257164 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-17 00:52:00.257174 | orchestrator | Saturday 17 May 2025 00:51:18 +0000 (0:00:00.590) 0:01:46.528 **********
2025-05-17 00:52:00.257183 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.257192 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.257202 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.257211 | orchestrator |
2025-05-17 00:52:00.257221 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-17 00:52:00.257238 | orchestrator | Saturday 17 May 2025 00:51:18 +0000 (0:00:00.902) 0:01:47.430 **********
2025-05-17 00:52:00.257248 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.257257 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.257267 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:52:00.257277 | orchestrator |
2025-05-17 00:52:00.257287 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-17 00:52:00.257297 | orchestrator | Saturday 17 May 2025 00:51:19 +0000 (0:00:00.594) 0:01:48.025 **********
2025-05-17 00:52:00.257307 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.257317 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.257327 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.257337 | orchestrator |
2025-05-17 00:52:00.257348 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-17 00:52:00.257358 | orchestrator | Saturday 17 May 2025 00:51:20 +0000 (0:00:00.968) 0:01:48.994 **********
2025-05-17 00:52:00.257367 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.257378 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.257388 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.257398 | orchestrator |
2025-05-17 00:52:00.257408 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-05-17 00:52:00.257418 | orchestrator | Saturday 17 May 2025 00:51:21 +0000 (0:00:00.712) 0:01:49.706 **********
2025-05-17 00:52:00.257428 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.257438 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.257448 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.257458 | orchestrator |
2025-05-17 00:52:00.257468 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-17 00:52:00.257478 | orchestrator | Saturday 17 May 2025 00:51:21 +0000 (0:00:00.401) 0:01:50.107 **********
2025-05-17 00:52:00.257489 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257501 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257511 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257521 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257531 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257547 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257573 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257583 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257593 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257603 | orchestrator |
2025-05-17 00:52:00.257613 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-05-17 00:52:00.257622 | orchestrator | Saturday 17 May 2025 00:51:23 +0000 (0:00:01.638) 0:01:51.746 **********
2025-05-17 00:52:00.257632 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257642 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257652 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257662 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257738 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257771 | orchestrator |
2025-05-17 00:52:00.257781 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-05-17 00:52:00.257791 | orchestrator | Saturday 17 May 2025 00:51:27 +0000 (0:00:04.514) 0:01:56.260 **********
2025-05-17 00:52:00.257801 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257810 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257820 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257830 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257841 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257856 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257871 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257888 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:52:00.257908 | orchestrator |
2025-05-17 00:52:00.257918 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-17 00:52:00.257928 | orchestrator | Saturday 17 May 2025 00:51:30 +0000 (0:00:02.953) 0:01:59.214 **********
2025-05-17 00:52:00.257937 | orchestrator |
2025-05-17 00:52:00.257947 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-17 00:52:00.257956 | orchestrator | Saturday 17 May 2025 00:51:30 +0000 (0:00:00.064) 0:01:59.279 **********
2025-05-17 00:52:00.257966 | orchestrator |
2025-05-17 00:52:00.257976 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-17 00:52:00.257985 | orchestrator | Saturday 17 May 2025 00:51:31 +0000 (0:00:00.228) 0:01:59.507 **********
2025-05-17 00:52:00.257995 | orchestrator |
2025-05-17 00:52:00.258096 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-17 00:52:00.258110 | orchestrator | Saturday 17 May 2025 00:51:31 +0000 (0:00:00.059) 0:01:59.566 **********
2025-05-17 00:52:00.258119 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:52:00.258129 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:52:00.258137 | orchestrator |
2025-05-17 00:52:00.258146 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-17 00:52:00.258154 | orchestrator | Saturday 17 May 2025 00:51:37 +0000 (0:00:06.166) 0:02:05.733 **********
2025-05-17 00:52:00.258162 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:52:00.258171 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:52:00.258180 | orchestrator |
2025-05-17 00:52:00.258190 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-17 00:52:00.258199 | orchestrator | Saturday 17 May 2025 00:51:44 +0000 (0:00:07.051) 0:02:12.784 **********
2025-05-17 00:52:00.258209 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:52:00.258218 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:52:00.258227 | orchestrator |
2025-05-17 00:52:00.258237 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-17 00:52:00.258247 | orchestrator | Saturday 17 May 2025 00:51:50 +0000 (0:00:06.264) 0:02:19.049 **********
2025-05-17 00:52:00.258256 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:52:00.258265 | orchestrator |
2025-05-17 00:52:00.258275 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-17 00:52:00.258298 | orchestrator | Saturday 17 May 2025 00:51:50 +0000 (0:00:00.273) 0:02:19.322 **********
2025-05-17 00:52:00.258307 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.258316 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.258325 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.258334 | orchestrator |
2025-05-17 00:52:00.258343 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-17 00:52:00.258352 | orchestrator | Saturday 17 May 2025 00:51:51 +0000 (0:00:00.817) 0:02:20.140 **********
2025-05-17 00:52:00.258360 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:52:00.258369 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.258377 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.258386 | orchestrator |
2025-05-17 00:52:00.258395 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-17 00:52:00.258404 | orchestrator | Saturday 17 May 2025 00:51:52 +0000 (0:00:00.999) 0:02:21.139 **********
2025-05-17 00:52:00.258412 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.258421 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.258430 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.258438 | orchestrator |
2025-05-17 00:52:00.258447 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-17 00:52:00.258456 | orchestrator | Saturday 17 May 2025 00:51:53 +0000 (0:00:01.196) 0:02:22.336 **********
2025-05-17 00:52:00.258464 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:52:00.258473 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:52:00.258481 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:52:00.258489 | orchestrator |
2025-05-17 00:52:00.258498 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-17 00:52:00.258507 | orchestrator | Saturday 17 May 2025 00:51:54 +0000 (0:00:00.829) 0:02:23.165 **********
2025-05-17 00:52:00.258515 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.258524 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.258533 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.258542 | orchestrator |
2025-05-17 00:52:00.258551 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-17 00:52:00.258560 | orchestrator | Saturday 17 May 2025 00:51:55 +0000 (0:00:00.947) 0:02:24.113 **********
2025-05-17 00:52:00.258569 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:52:00.258578 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:52:00.258586 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:52:00.258594 | orchestrator |
2025-05-17 00:52:00.258602 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:52:00.258618 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-17 00:52:00.258628 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-05-17 00:52:00.258648 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-05-17 00:52:00.258658 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:52:00.258667 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:52:00.258676 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 00:52:00.258685 | orchestrator |
2025-05-17 00:52:00.258694 | orchestrator |
2025-05-17 00:52:00.258702 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 00:52:00.258711 | orchestrator | Saturday 17 May 2025 00:51:57 +0000 (0:00:01.426) 0:02:25.539 **********
2025-05-17 00:52:00.258720 | orchestrator | ===============================================================================
2025-05-17 00:52:00.258735 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.02s
2025-05-17 00:52:00.258744 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.68s
2025-05-17 00:52:00.258753 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.86s
2025-05-17 00:52:00.258763 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.11s
2025-05-17 00:52:00.258771 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.01s
2025-05-17 00:52:00.258780 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.51s
2025-05-17 00:52:00.258788 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.05s
2025-05-17 00:52:00.258797 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.73s
2025-05-17 00:52:00.258805 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.95s
2025-05-17 00:52:00.258814 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.54s
2025-05-17 00:52:00.258822 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.48s
2025-05-17 00:52:00.258830 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.48s
2025-05-17 00:52:00.258838 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.34s
2025-05-17 00:52:00.258846 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.64s
2025-05-17 00:52:00.258854 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.53s
2025-05-17 00:52:00.258863 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s
2025-05-17 00:52:00.258871 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.44s
2025-05-17 00:52:00.258879 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.43s
2025-05-17 00:52:00.258888 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.33s
2025-05-17 00:52:00.258896 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.29s
2025-05-17 00:52:00.258905 | orchestrator | 2025-05-17 00:52:00 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:52:00.258914 | orchestrator | 2025-05-17 00:52:00 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:52:03.306384 | orchestrator | 2025-05-17 00:52:03 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED
2025-05-17 00:52:03.306807 | orchestrator | 2025-05-17 00:52:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:52:03.307468 | orchestrator | 2025-05-17 00:52:03 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:52:03.307549 | orchestrator | 2025-05-17 00:52:03 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:52:06.370812 | orchestrator | 2025-05-17 00:52:06 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED
2025-05-17 00:52:06.370926 | orchestrator | 2025-05-17 00:52:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:52:06.375063 | orchestrator | 2025-05-17 00:52:06 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is
in state STARTED
2025-05-17 00:52:06.375095 | orchestrator | 2025-05-17 00:52:06 | INFO  | Wait 1 second(s) until the next check
[... the same three polling messages repeat every ~3 seconds from 00:52:09 to 00:54:48: tasks e3ddd45a-af55-4967-9fe3-1729f59882a8, dd82ba6a-9aa0-4607-8198-d19da9595ea1 and 623588a7-4543-4207-acc9-075a6222e036 remain in state STARTED ...]
2025-05-17 00:54:51.354696 | orchestrator | 2025-05-17 00:54:51 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED
2025-05-17 00:54:51.357444 | orchestrator | 2025-05-17 00:54:51 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:54:51.360278 | orchestrator | 2025-05-17 00:54:51 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:54:51.361077 | orchestrator | 2025-05-17 00:54:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:54:54.411401 | orchestrator | 2025-05-17 00:54:54 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:54:54.412367 | orchestrator | 2025-05-17 00:54:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:54:54.415230 | orchestrator | 2025-05-17 00:54:54 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:54:54.422211 | orchestrator | 2025-05-17 00:54:54 | INFO  | Task 21dcae63-d840-47b2-9cc3-9e67420cc576 is in state STARTED 2025-05-17 00:54:54.422237 | orchestrator | 2025-05-17 00:54:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:54:57.475383 | orchestrator | 2025-05-17 00:54:57 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:54:57.477507 | orchestrator | 2025-05-17 00:54:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:54:57.480130 | orchestrator | 2025-05-17 00:54:57 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:54:57.482489 | orchestrator | 2025-05-17 00:54:57 | INFO  | Task 21dcae63-d840-47b2-9cc3-9e67420cc576 is in state STARTED 2025-05-17 00:54:57.482506 | orchestrator | 2025-05-17 00:54:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:00.529543 | orchestrator | 2025-05-17 00:55:00 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:55:00.529649 | orchestrator | 2025-05-17 00:55:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:00.531103 | orchestrator | 2025-05-17 00:55:00 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:00.531687 | orchestrator | 2025-05-17 00:55:00 | INFO  | Task 21dcae63-d840-47b2-9cc3-9e67420cc576 is in state STARTED 2025-05-17 00:55:00.531708 | orchestrator | 2025-05-17 00:55:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:03.576492 | orchestrator | 2025-05-17 00:55:03 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:55:03.578206 | orchestrator | 2025-05-17 00:55:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:03.579960 | orchestrator | 2025-05-17 00:55:03 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:03.580947 | orchestrator | 2025-05-17 00:55:03 | INFO  | Task 21dcae63-d840-47b2-9cc3-9e67420cc576 is in state SUCCESS 2025-05-17 00:55:03.581301 | orchestrator | 2025-05-17 00:55:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:06.627495 | orchestrator | 2025-05-17 00:55:06 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:55:06.629753 | orchestrator | 2025-05-17 00:55:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:06.632096 | orchestrator | 2025-05-17 00:55:06 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:06.632148 | orchestrator | 2025-05-17 00:55:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:09.671833 | orchestrator | 2025-05-17 00:55:09 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:55:09.675217 | orchestrator | 2025-05-17 00:55:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:09.679675 | orchestrator | 2025-05-17 00:55:09 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:09.680234 | orchestrator | 2025-05-17 00:55:09 | INFO  | Wait 1 second(s) until the next 
check 2025-05-17 00:55:12.728679 | orchestrator | 2025-05-17 00:55:12 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state STARTED 2025-05-17 00:55:12.729757 | orchestrator | 2025-05-17 00:55:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:12.730405 | orchestrator | 2025-05-17 00:55:12 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:12.730768 | orchestrator | 2025-05-17 00:55:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:15.798448 | orchestrator | 2025-05-17 00:55:15 | INFO  | Task e3ddd45a-af55-4967-9fe3-1729f59882a8 is in state SUCCESS 2025-05-17 00:55:15.801332 | orchestrator | 2025-05-17 00:55:15.801407 | orchestrator | None 2025-05-17 00:55:15.801422 | orchestrator | 2025-05-17 00:55:15.801434 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 00:55:15.801445 | orchestrator | 2025-05-17 00:55:15.801480 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 00:55:15.801568 | orchestrator | Saturday 17 May 2025 00:48:09 +0000 (0:00:00.641) 0:00:00.641 ********** 2025-05-17 00:55:15.801583 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.801597 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:55:15.801608 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:55:15.801630 | orchestrator | 2025-05-17 00:55:15.801641 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 00:55:15.801653 | orchestrator | Saturday 17 May 2025 00:48:10 +0000 (0:00:00.836) 0:00:01.478 ********** 2025-05-17 00:55:15.801664 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-17 00:55:15.801676 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-17 00:55:15.801687 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 
2025-05-17 00:55:15.801697 | orchestrator |
2025-05-17 00:55:15.801708 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-17 00:55:15.801719 | orchestrator |
2025-05-17 00:55:15.801750 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-17 00:55:15.801761 | orchestrator | Saturday 17 May 2025 00:48:10 +0000 (0:00:00.379) 0:00:01.857 **********
2025-05-17 00:55:15.801772 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.801805 | orchestrator |
2025-05-17 00:55:15.801817 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-17 00:55:15.801827 | orchestrator | Saturday 17 May 2025 00:48:11 +0000 (0:00:00.839) 0:00:02.697 **********
2025-05-17 00:55:15.801838 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:55:15.801849 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:55:15.801860 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:55:15.801871 | orchestrator |
2025-05-17 00:55:15.801882 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-17 00:55:15.801892 | orchestrator | Saturday 17 May 2025 00:48:12 +0000 (0:00:00.991) 0:00:03.688 **********
2025-05-17 00:55:15.801903 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.801914 | orchestrator |
2025-05-17 00:55:15.801926 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-17 00:55:15.801939 | orchestrator | Saturday 17 May 2025 00:48:13 +0000 (0:00:00.700) 0:00:04.389 **********
2025-05-17 00:55:15.801951 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:55:15.801964 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:55:15.801976 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:55:15.802081 | orchestrator |
2025-05-17 00:55:15.802098 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-17 00:55:15.802110 | orchestrator | Saturday 17 May 2025 00:48:14 +0000 (0:00:00.771) 0:00:05.161 **********
2025-05-17 00:55:15.802123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-17 00:55:15.802136 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-17 00:55:15.802148 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-17 00:55:15.802161 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-17 00:55:15.802214 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-17 00:55:15.802227 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-17 00:55:15.802240 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-17 00:55:15.802253 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-17 00:55:15.802266 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-17 00:55:15.802279 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-17 00:55:15.802290 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-17 00:55:15.802300 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-17 00:55:15.802311 | orchestrator |
2025-05-17 00:55:15.802322 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-17 00:55:15.802334 | orchestrator | Saturday 17 May 2025 00:48:19 +0000 (0:00:04.984) 0:00:10.146 **********
2025-05-17 00:55:15.802345 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-17 00:55:15.802356 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-17 00:55:15.802367 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-17 00:55:15.802378 | orchestrator |
2025-05-17 00:55:15.802389 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-17 00:55:15.802400 | orchestrator | Saturday 17 May 2025 00:48:20 +0000 (0:00:01.653) 0:00:11.799 **********
2025-05-17 00:55:15.802410 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-17 00:55:15.802421 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-17 00:55:15.802432 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-17 00:55:15.802443 | orchestrator |
2025-05-17 00:55:15.802462 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-17 00:55:15.802473 | orchestrator | Saturday 17 May 2025 00:48:23 +0000 (0:00:02.581) 0:00:14.381 **********
2025-05-17 00:55:15.802484 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-05-17 00:55:15.802495 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.802520 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-05-17 00:55:15.802531 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.802542 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-05-17 00:55:15.802553 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.802564 | orchestrator |
2025-05-17 00:55:15.802575 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-05-17 00:55:15.802586 | orchestrator | Saturday 17 May 2025 00:48:24 +0000 (0:00:00.835) 0:00:15.216 **********
2025-05-17 00:55:15.802605 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.802623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.802635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.802646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.802659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.802685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.802703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.802716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-17 00:55:15.802727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.802738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-17 00:55:15.802750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.802761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-17
00:55:15.802778 | orchestrator |
2025-05-17 00:55:15.802789 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-05-17 00:55:15.802800 | orchestrator | Saturday 17 May 2025 00:48:27 +0000 (0:00:02.769) 0:00:17.986 **********
2025-05-17 00:55:15.802811 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.802822 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.802833 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.802844 | orchestrator |
2025-05-17 00:55:15.802862 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-05-17 00:55:15.802881 | orchestrator | Saturday 17 May 2025 00:48:30 +0000 (0:00:03.199) 0:00:21.185 **********
2025-05-17 00:55:15.802949 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-05-17 00:55:15.803045 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-05-17 00:55:15.803065 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-05-17 00:55:15.803083 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-05-17 00:55:15.803170 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-05-17 00:55:15.803200 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-05-17 00:55:15.803237 | orchestrator |
2025-05-17 00:55:15.803267 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-05-17 00:55:15.803285 | orchestrator | Saturday 17 May 2025 00:48:34 +0000 (0:00:04.310) 0:00:25.496 **********
2025-05-17 00:55:15.803302 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.803319 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.803338 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.803354 | orchestrator |
2025-05-17 00:55:15.803370 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-05-17 00:55:15.803386 | orchestrator | Saturday 17 May 2025 00:48:35 +0000 (0:00:01.214) 0:00:26.710 **********
2025-05-17 00:55:15.803402 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:55:15.803418 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:55:15.803435 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:55:15.803453 | orchestrator |
2025-05-17 00:55:15.803471 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-05-17 00:55:15.803488 | orchestrator | Saturday 17 May 2025 00:48:37 +0000 (0:00:01.821) 0:00:28.531 **********
2025-05-17 00:55:15.803508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.803530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.803557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.803570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.803595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.803614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.803627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.803638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-17 00:55:15.803657 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.803668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.803680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-17 00:55:15.803691 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.803702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-17
00:55:15.803726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-17 00:55:15.803738 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.803749 | orchestrator | 2025-05-17 00:55:15.803761 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-17 00:55:15.803772 | orchestrator | Saturday 17 May 2025 00:48:39 +0000 (0:00:02.047) 0:00:30.579 ********** 2025-05-17 00:55:15.803783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-17 00:55:15.803796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-17 00:55:15.803814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-17 00:55:15.803826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-17 00:55:15.803844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-17 00:55:15.803861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-17 00:55:15.803873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-17 00:55:15.803885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-17 00:55:15.803910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-17 00:55:15.803922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-17 00:55:15.803933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-17 00:55:15.803952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-17 00:55:15.803964 | orchestrator | 2025-05-17 00:55:15.803975 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-17 00:55:15.804017 | orchestrator | Saturday 17 May 2025 00:48:43 +0000 (0:00:03.679) 0:00:34.258 ********** 2025-05-17 00:55:15.804036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-17 00:55:15.804139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-17 00:55:15.804158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-17 00:55:15.804170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-17 00:55:15.804182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.804193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-17 00:55:15.804204 | orchestrator |
2025-05-17 00:55:15.804215 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-05-17 00:55:15.804226 | orchestrator | Saturday 17 May 2025 00:48:46 +0000 (0:00:02.881) 0:00:37.139 **********
2025-05-17 00:55:15.804244 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-17 00:55:15.804256 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-17 00:55:15.804267 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-17 00:55:15.804278 | orchestrator |
2025-05-17 00:55:15.804288 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-05-17 00:55:15.804304 | orchestrator | Saturday 17 May 2025 00:48:47 +0000 (0:00:01.757) 0:00:38.897 **********
2025-05-17 00:55:15.804315 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-17 00:55:15.804326 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-17 00:55:15.804345 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-17 00:55:15.804356 | orchestrator |
2025-05-17 00:55:15.804367 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-05-17 00:55:15.804378 | orchestrator | Saturday 17 May 2025 00:48:50 +0000 (0:00:02.356) 0:00:41.254 **********
2025-05-17 00:55:15.804388 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.804399 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.804410 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.804421 | orchestrator |
2025-05-17 00:55:15.804432 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-05-17 00:55:15.804442 | orchestrator | Saturday 17 May 2025 00:48:51 +0000 (0:00:00.968) 0:00:42.222 **********
2025-05-17 00:55:15.804453 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-17 00:55:15.804466 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-17 00:55:15.804477 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-17 00:55:15.804487 | orchestrator |
2025-05-17 00:55:15.804498 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-05-17 00:55:15.804509 | orchestrator | Saturday 17 May 2025 00:48:54 +0000 (0:00:03.178) 0:00:45.401 **********
2025-05-17 00:55:15.804520 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-17 00:55:15.804531 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-17 00:55:15.804542 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-17 00:55:15.804553 | orchestrator |
2025-05-17 00:55:15.804564 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-05-17 00:55:15.804575 | orchestrator | Saturday 17 May 2025 00:48:58 +0000 (0:00:03.723) 0:00:49.125 **********
2025-05-17 00:55:15.804585 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-05-17 00:55:15.804601 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-05-17 00:55:15.804620 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-05-17 00:55:15.804637 | orchestrator |
2025-05-17 00:55:15.804655 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-05-17 00:55:15.804673 | orchestrator | Saturday 17 May 2025 00:48:59 +0000 (0:00:01.580) 0:00:50.706 **********
2025-05-17 00:55:15.804691 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-05-17 00:55:15.804709 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-05-17 00:55:15.804729 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-05-17 00:55:15.804741 | orchestrator |
2025-05-17 00:55:15.804752 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-17 00:55:15.804763 | orchestrator | Saturday 17 May 2025 00:49:01 +0000 (0:00:01.622) 0:00:52.329 **********
2025-05-17 00:55:15.804773 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.804784 | orchestrator |
2025-05-17 00:55:15.804795 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-05-17 00:55:15.804805 | orchestrator | Saturday 17 May 2025 00:49:02 +0000 (0:00:00.748) 0:00:53.077 **********
2025-05-17 00:55:15.804817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.804852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-17 00:55:15.804911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-17 00:55:15.804938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-17 00:55:15.804956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-17 00:55:15.804967 | orchestrator | 2025-05-17 00:55:15.804978 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-17 00:55:15.805029 | orchestrator | Saturday 17 May 2025 00:49:05 +0000 (0:00:03.132) 0:00:56.210 ********** 2025-05-17 00:55:15.805041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-17 00:55:15.805053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-17 00:55:15.805064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-17 00:55:15.805076 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.805087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-17 00:55:15.805107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.805131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.805144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.805155 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.805166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.805178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.805189 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.805200 | orchestrator |
2025-05-17 00:55:15.805211 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-05-17 00:55:15.805222 | orchestrator | Saturday 17 May 2025 00:49:06 +0000 (0:00:00.806) 0:00:57.017 **********
2025-05-17 00:55:15.805233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.805251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.805270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.805282 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.805298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.805311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.805322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.805333 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.805344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.805367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.805378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.805390 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.805401 | orchestrator |
2025-05-17 00:55:15.805411 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-05-17 00:55:15.805428 | orchestrator | Saturday 17 May 2025 00:49:07 +0000 (0:00:01.526) 0:00:58.543 **********
2025-05-17 00:55:15.805440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-17 00:55:15.805451 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-17 00:55:15.805462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-17 00:55:15.805473 | orchestrator |
2025-05-17 00:55:15.805484 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-05-17 00:55:15.805499 | orchestrator | Saturday 17 May 2025 00:49:09 +0000 (0:00:01.839) 0:01:00.382 **********
2025-05-17 00:55:15.805510 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-17 00:55:15.805521 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-17 00:55:15.805532 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-17 00:55:15.805543 | orchestrator |
2025-05-17 00:55:15.805554 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-05-17 00:55:15.805565 | orchestrator | Saturday 17 May 2025 00:49:11 +0000 (0:00:02.126) 0:01:02.509 **********
2025-05-17 00:55:15.805575 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-17 00:55:15.805586 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-17 00:55:15.805597 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-17 00:55:15.805608 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-17 00:55:15.805619 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.805630 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-17 00:55:15.805641 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.805652 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-17 00:55:15.805663 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.805674 | orchestrator |
2025-05-17 00:55:15.805684 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-05-17 00:55:15.805703 | orchestrator | Saturday 17 May 2025 00:49:13 +0000 (0:00:01.627) 0:01:04.136 **********
2025-05-17 00:55:15.805714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.805726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.805737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-17 00:55:15.805756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.805774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.805786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-17 00:55:15.805804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.805816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-17 00:55:15.805827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.806207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-17 00:55:15.806238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-17 00:55:15.806249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642', '__omit_place_holder__2887831c5e7105085a505f77a8e047287f8c3642'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-17 00:55:15.806260 | orchestrator |
2025-05-17 00:55:15.806272 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-05-17 00:55:15.806297 | orchestrator | Saturday 17 May 2025 00:49:16 +0000 (0:00:02.958) 0:01:07.095 **********
2025-05-17 00:55:15.806308 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.806319 | orchestrator |
2025-05-17 00:55:15.806330 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-05-17 00:55:15.806341 | orchestrator | Saturday 17 May 2025 00:49:16 +0000 (0:00:00.805) 0:01:07.901 **********
2025-05-17 00:55:15.806353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-17 00:55:15.806365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.806377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-17 00:55:15.806451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.806464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-17 00:55:15.806487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.806522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806552 | orchestrator |
2025-05-17 00:55:15.806563 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-05-17 00:55:15.806574 | orchestrator | Saturday 17 May 2025 00:49:22 +0000 (0:00:05.118) 0:01:13.019 **********
2025-05-17 00:55:15.806585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-17 00:55:15.806597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.806633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806689 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.806705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-17 00:55:15.806746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.806759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806798 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.806810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-17 00:55:15.806849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.806867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.806900 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.806912 | orchestrator |
2025-05-17 00:55:15.806925 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-05-17 00:55:15.806975 | orchestrator | Saturday 17 May 2025 00:49:22 +0000 (0:00:00.590) 0:01:13.610 **********
2025-05-17 00:55:15.807013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-17 00:55:15.807029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-17 00:55:15.807042 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.807055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-17 00:55:15.807066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-17 00:55:15.807077 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.807088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-17 00:55:15.807099 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-17 00:55:15.807110 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.807121 | orchestrator | 2025-05-17 00:55:15.807132 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-17 00:55:15.807143 | orchestrator | Saturday 17 May 2025 00:49:23 +0000 (0:00:00.843) 0:01:14.453 ********** 2025-05-17 00:55:15.807154 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.807166 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.807176 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.807187 | orchestrator | 2025-05-17 00:55:15.807198 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-17 00:55:15.807209 | orchestrator | Saturday 17 May 2025 00:49:24 +0000 (0:00:01.095) 0:01:15.548 ********** 2025-05-17 00:55:15.807219 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.807230 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.807241 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.807252 | orchestrator | 2025-05-17 00:55:15.807263 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-17 00:55:15.807273 | orchestrator | Saturday 17 May 2025 00:49:26 +0000 (0:00:01.720) 0:01:17.268 ********** 2025-05-17 00:55:15.807284 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.807306 | orchestrator | 2025-05-17 00:55:15.807317 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-17 00:55:15.807328 | orchestrator | Saturday 17 May 2025 00:49:27 +0000 (0:00:00.854) 0:01:18.122 ********** 2025-05-17 00:55:15.807353 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.807368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.807405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.807435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807484 | orchestrator | 2025-05-17 00:55:15.807495 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-17 00:55:15.807506 | orchestrator | Saturday 17 May 2025 00:49:31 +0000 (0:00:04.734) 0:01:22.857 ********** 2025-05-17 00:55:15.807517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.807543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807572 | orchestrator | skipping: [testbed-node-0] 2025-05-17 
00:55:15.807584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.807596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.807625 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.807944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.807970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.808079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.808095 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.808105 | orchestrator | 2025-05-17 00:55:15.808115 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-17 00:55:15.808125 | orchestrator | Saturday 17 May 2025 00:49:33 +0000 (0:00:01.210) 0:01:24.068 ********** 2025-05-17 00:55:15.808135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-17 00:55:15.808145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-17 00:55:15.808155 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.808165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-17 00:55:15.808175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-17 00:55:15.808194 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.808204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-17 00:55:15.808214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-17 00:55:15.808224 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.808233 | orchestrator | 2025-05-17 00:55:15.808243 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-17 00:55:15.808252 | orchestrator | Saturday 17 May 2025 00:49:34 +0000 (0:00:01.236) 0:01:25.305 ********** 2025-05-17 00:55:15.808262 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.808271 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.808281 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.808291 | orchestrator | 2025-05-17 00:55:15.808300 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-17 00:55:15.808343 | orchestrator | Saturday 17 May 2025 00:49:35 +0000 (0:00:01.374) 0:01:26.680 ********** 2025-05-17 00:55:15.808354 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.808364 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.808374 | orchestrator | changed: 
[testbed-node-2] 2025-05-17 00:55:15.808383 | orchestrator | 2025-05-17 00:55:15.808393 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-17 00:55:15.808403 | orchestrator | Saturday 17 May 2025 00:49:38 +0000 (0:00:02.670) 0:01:29.350 ********** 2025-05-17 00:55:15.808412 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.808422 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.808432 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.808441 | orchestrator | 2025-05-17 00:55:15.808459 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-17 00:55:15.808469 | orchestrator | Saturday 17 May 2025 00:49:38 +0000 (0:00:00.267) 0:01:29.618 ********** 2025-05-17 00:55:15.808479 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.808488 | orchestrator | 2025-05-17 00:55:15.808497 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-17 00:55:15.808507 | orchestrator | Saturday 17 May 2025 00:49:39 +0000 (0:00:00.972) 0:01:30.590 ********** 2025-05-17 00:55:15.808523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-17 00:55:15.808534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-17 00:55:15.808554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-17 00:55:15.808567 | orchestrator | 2025-05-17 00:55:15.808578 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-17 
00:55:15.808629 | orchestrator | Saturday 17 May 2025 00:49:43 +0000 (0:00:03.976) 0:01:34.566 ********** 2025-05-17 00:55:15.808641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-17 00:55:15.808651 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.808706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-17 00:55:15.808716 | orchestrator | skipping: [testbed-node-0] 2025-05-17 
00:55:15.808725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-17 00:55:15.808738 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.808746 | orchestrator | 2025-05-17 00:55:15.808754 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-17 00:55:15.808762 | orchestrator | Saturday 17 May 2025 00:49:46 +0000 (0:00:02.797) 0:01:37.364 ********** 2025-05-17 00:55:15.808771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-17 00:55:15.808781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-17 00:55:15.808790 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.808798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-17 00:55:15.808806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-17 00:55:15.808815 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.808823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-17 00:55:15.808835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-17 00:55:15.808844 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.808852 | orchestrator | 2025-05-17 00:55:15.808860 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-17 00:55:15.808868 | orchestrator | Saturday 17 May 2025 00:49:48 +0000 (0:00:01.636) 0:01:39.000 ********** 2025-05-17 00:55:15.808879 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.808887 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.808895 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.808903 | orchestrator | 2025-05-17 00:55:15.808911 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-17 00:55:15.808919 | orchestrator | Saturday 17 May 2025 00:49:48 +0000 (0:00:00.835) 0:01:39.836 ********** 2025-05-17 00:55:15.808927 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.808940 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.808947 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.808955 | orchestrator | 2025-05-17 00:55:15.808963 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-17 00:55:15.808970 | orchestrator | Saturday 17 May 2025 00:49:50 +0000 (0:00:01.368) 0:01:41.204 ********** 2025-05-17 00:55:15.808978 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.809013 | orchestrator | 2025-05-17 00:55:15.809021 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-17 00:55:15.809029 | orchestrator | Saturday 17 May 2025 00:49:51 +0000 (0:00:00.941) 0:01:42.145 ********** 2025-05-17 00:55:15.809038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.809048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.809093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.809144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809175 | orchestrator | 2025-05-17 00:55:15.809183 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-17 00:55:15.809191 | orchestrator | Saturday 17 May 2025 00:49:55 +0000 (0:00:04.330) 0:01:46.476 ********** 2025-05-17 00:55:15.809200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.809208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809248 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.809256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.809265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809303 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.809312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.809320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809345 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.809353 | orchestrator | 2025-05-17 00:55:15.809366 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] 
************************ 2025-05-17 00:55:15.809374 | orchestrator | Saturday 17 May 2025 00:49:56 +0000 (0:00:01.017) 0:01:47.493 ********** 2025-05-17 00:55:15.809382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-17 00:55:15.809573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-17 00:55:15.809585 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.809594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-17 00:55:15.809606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-17 00:55:15.809614 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.809623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-17 00:55:15.809631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-17 00:55:15.809638 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.809646 | orchestrator | 2025-05-17 00:55:15.809654 | orchestrator | TASK [proxysql-config : Copying over 
cinder ProxySQL users config] ************* 2025-05-17 00:55:15.809662 | orchestrator | Saturday 17 May 2025 00:49:57 +0000 (0:00:00.993) 0:01:48.487 ********** 2025-05-17 00:55:15.809670 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.809677 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.809685 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.809693 | orchestrator | 2025-05-17 00:55:15.809700 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-17 00:55:15.809708 | orchestrator | Saturday 17 May 2025 00:49:59 +0000 (0:00:01.509) 0:01:49.996 ********** 2025-05-17 00:55:15.809716 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.809724 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.809731 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.809739 | orchestrator | 2025-05-17 00:55:15.809747 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-17 00:55:15.809754 | orchestrator | Saturday 17 May 2025 00:50:01 +0000 (0:00:02.067) 0:01:52.064 ********** 2025-05-17 00:55:15.809762 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.809770 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.809778 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.809785 | orchestrator | 2025-05-17 00:55:15.809793 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-17 00:55:15.809801 | orchestrator | Saturday 17 May 2025 00:50:01 +0000 (0:00:00.280) 0:01:52.345 ********** 2025-05-17 00:55:15.809808 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.809816 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.809824 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.809832 | orchestrator | 2025-05-17 00:55:15.809840 | orchestrator | TASK [include_role : designate] 
************************************************ 2025-05-17 00:55:15.809847 | orchestrator | Saturday 17 May 2025 00:50:01 +0000 (0:00:00.436) 0:01:52.781 ********** 2025-05-17 00:55:15.809855 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.809863 | orchestrator | 2025-05-17 00:55:15.809877 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-17 00:55:15.809884 | orchestrator | Saturday 17 May 2025 00:50:02 +0000 (0:00:00.982) 0:01:53.764 ********** 2025-05-17 00:55:15.809893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 00:55:15.809907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 00:55:15.809920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 00:55:15.809929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 00:55:15.809946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.809971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 00:55:15.810540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 00:55:15.810550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810602 | orchestrator | 2025-05-17 00:55:15.810615 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-17 00:55:15.810624 | orchestrator | Saturday 17 May 2025 00:50:07 +0000 (0:00:04.740) 0:01:58.504 ********** 2025-05-17 00:55:15.810649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 00:55:15.810658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 00:55:15.810672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810722 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.810731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 00:55:15.810745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 00:55:15.810753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810804 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.810812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 00:55:15.810827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 00:55:15.810835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.810891 | orchestrator | 
skipping: [testbed-node-2] 2025-05-17 00:55:15.810899 | orchestrator | 2025-05-17 00:55:15.810907 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-17 00:55:15.810915 | orchestrator | Saturday 17 May 2025 00:50:09 +0000 (0:00:01.470) 0:01:59.975 ********** 2025-05-17 00:55:15.810924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-17 00:55:15.810932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-17 00:55:15.810940 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.810948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-17 00:55:15.810957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-17 00:55:15.810964 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.810972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-17 00:55:15.810980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-17 00:55:15.811002 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.811010 | orchestrator | 2025-05-17 
00:55:15.811018 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-17 00:55:15.811026 | orchestrator | Saturday 17 May 2025 00:50:10 +0000 (0:00:01.643) 0:02:01.619 ********** 2025-05-17 00:55:15.811034 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.811052 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.811060 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.811068 | orchestrator | 2025-05-17 00:55:15.811076 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-17 00:55:15.811084 | orchestrator | Saturday 17 May 2025 00:50:11 +0000 (0:00:01.215) 0:02:02.834 ********** 2025-05-17 00:55:15.811092 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.811100 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.811107 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.811115 | orchestrator | 2025-05-17 00:55:15.811123 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-17 00:55:15.811131 | orchestrator | Saturday 17 May 2025 00:50:14 +0000 (0:00:02.236) 0:02:05.071 ********** 2025-05-17 00:55:15.811139 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.811147 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.811155 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.811162 | orchestrator | 2025-05-17 00:55:15.811170 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-17 00:55:15.812902 | orchestrator | Saturday 17 May 2025 00:50:14 +0000 (0:00:00.526) 0:02:05.597 ********** 2025-05-17 00:55:15.812926 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.812945 | orchestrator | 2025-05-17 00:55:15.812952 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] 
********************* 2025-05-17 00:55:15.812959 | orchestrator | Saturday 17 May 2025 00:50:15 +0000 (0:00:01.213) 0:02:06.811 ********** 2025-05-17 00:55:15.812975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-17 00:55:15.813003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 
'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.813023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-17 00:55:15.813037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.813054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-17 00:55:15.813071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 
'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.813078 | orchestrator | 2025-05-17 00:55:15.813085 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-17 00:55:15.813092 | orchestrator | Saturday 17 May 2025 00:50:21 +0000 (0:00:06.042) 0:02:12.853 ********** 2025-05-17 00:55:15.813107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-17 00:55:15.813120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 
'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.813129 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.813141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-17 00:55:15.813156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.813164 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.813172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-17 00:55:15.813191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.813199 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.813206 | orchestrator | 2025-05-17 00:55:15.813213 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-17 00:55:15.813219 | orchestrator | Saturday 17 May 2025 00:50:25 +0000 (0:00:03.153) 0:02:16.007 ********** 2025-05-17 00:55:15.813226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-17 00:55:15.813234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-17 00:55:15.813241 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.813248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-17 00:55:15.813263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-17 00:55:15.813271 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.813281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-17 00:55:15.813288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-17 00:55:15.813295 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.813302 | orchestrator | 2025-05-17 00:55:15.813309 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-17 00:55:15.813315 | orchestrator | Saturday 17 May 2025 00:50:29 +0000 (0:00:04.313) 0:02:20.320 ********** 2025-05-17 00:55:15.813322 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.813329 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.813336 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.813342 | orchestrator | 2025-05-17 00:55:15.813349 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-17 00:55:15.813356 | orchestrator | Saturday 17 May 2025 00:50:30 +0000 (0:00:01.026) 0:02:21.347 ********** 2025-05-17 00:55:15.813362 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.813369 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.813376 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.813382 | orchestrator | 2025-05-17 00:55:15.813389 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-17 00:55:15.813395 | orchestrator | Saturday 17 May 2025 00:50:32 +0000 (0:00:01.765) 0:02:23.112 ********** 2025-05-17 
00:55:15.813402 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.813409 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.813415 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.813422 | orchestrator | 2025-05-17 00:55:15.813428 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-17 00:55:15.813435 | orchestrator | Saturday 17 May 2025 00:50:32 +0000 (0:00:00.356) 0:02:23.468 ********** 2025-05-17 00:55:15.813442 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.813448 | orchestrator | 2025-05-17 00:55:15.813455 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-17 00:55:15.813461 | orchestrator | Saturday 17 May 2025 00:50:33 +0000 (0:00:01.140) 0:02:24.609 ********** 2025-05-17 00:55:15.813468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-17 00:55:15.813480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-17 00:55:15.813492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-17 00:55:15.813499 | orchestrator | 2025-05-17 00:55:15.813506 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-17 00:55:15.813516 | orchestrator | Saturday 17 May 2025 00:50:38 +0000 (0:00:04.841) 0:02:29.450 ********** 2025-05-17 00:55:15.813523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-17 00:55:15.813530 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.813537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-17 00:55:15.813544 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.813550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-17 00:55:15.813562 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.813568 | orchestrator | 2025-05-17 00:55:15.813575 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-17 00:55:15.813582 | orchestrator | Saturday 17 May 2025 00:50:38 +0000 
(0:00:00.421) 0:02:29.871 ********** 2025-05-17 00:55:15.813588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-17 00:55:15.813596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-17 00:55:15.813603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-17 00:55:15.813610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-17 00:55:15.813617 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.813623 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.813630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-17 00:55:15.813640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-17 00:55:15.813647 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.813654 | orchestrator | 2025-05-17 00:55:15.813660 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-17 00:55:15.813667 | orchestrator | Saturday 17 May 2025 00:50:39 +0000 (0:00:00.739) 0:02:30.611 ********** 2025-05-17 00:55:15.813674 | orchestrator | 
changed: [testbed-node-0] 2025-05-17 00:55:15.813680 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.813687 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.813694 | orchestrator | 2025-05-17 00:55:15.813700 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-17 00:55:15.813726 | orchestrator | Saturday 17 May 2025 00:50:40 +0000 (0:00:01.112) 0:02:31.723 ********** 2025-05-17 00:55:15.813734 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.813741 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.813747 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.813857 | orchestrator | 2025-05-17 00:55:15.813865 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-17 00:55:15.813872 | orchestrator | Saturday 17 May 2025 00:50:42 +0000 (0:00:01.903) 0:02:33.627 ********** 2025-05-17 00:55:15.813878 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.813885 | orchestrator | 2025-05-17 00:55:15.813892 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-17 00:55:15.813898 | orchestrator | Saturday 17 May 2025 00:50:43 +0000 (0:00:01.196) 0:02:34.824 ********** 2025-05-17 00:55:15.813906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.813919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.813926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.813942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.813950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.813958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.813971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.813978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.814001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.814008 | orchestrator | 2025-05-17 00:55:15.814050 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-17 00:55:15.814058 | orchestrator | Saturday 17 May 2025 00:50:51 +0000 (0:00:07.912) 0:02:42.737 ********** 2025-05-17 00:55:15.814069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-17 
00:55:15.814081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.814088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.814096 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.814103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.814114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.814125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.814136 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.814143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.814150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 
'tls_backend': 'no'}}}})  2025-05-17 00:55:15.814157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.814164 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.814171 | orchestrator | 2025-05-17 00:55:15.814178 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-17 00:55:15.814185 | orchestrator | Saturday 17 May 2025 00:50:52 +0000 (0:00:01.055) 0:02:43.792 ********** 2025-05-17 00:55:15.814191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  
2025-05-17 00:55:15.814226 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.814233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814268 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.814274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-17 00:55:15.814302 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.814308 | orchestrator | 2025-05-17 00:55:15.814315 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-17 00:55:15.814322 | orchestrator | Saturday 17 May 2025 00:50:54 +0000 (0:00:01.334) 0:02:45.126 ********** 2025-05-17 00:55:15.814328 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.814335 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.814341 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.814348 | orchestrator | 2025-05-17 00:55:15.814355 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-17 00:55:15.814361 | orchestrator | Saturday 17 May 2025 00:50:55 +0000 (0:00:01.356) 0:02:46.483 ********** 2025-05-17 00:55:15.814368 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.814375 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.814381 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.814388 | orchestrator | 2025-05-17 00:55:15.814395 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-17 00:55:15.814402 | orchestrator | Saturday 17 May 2025 00:50:57 +0000 (0:00:02.267) 0:02:48.750 ********** 2025-05-17 00:55:15.814408 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.814415 | orchestrator | 2025-05-17 00:55:15.814421 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-17 00:55:15.814428 | orchestrator | Saturday 17 May 2025 00:50:58 +0000 (0:00:01.087) 0:02:49.837 ********** 2025-05-17 00:55:15.814445 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 00:55:15.814471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 00:55:15.814498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 00:55:15.814534 | orchestrator | 2025-05-17 00:55:15.814541 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-17 00:55:15.814548 | orchestrator | Saturday 17 May 2025 00:51:02 +0000 (0:00:03.840) 0:02:53.677 ********** 2025-05-17 00:55:15.814556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-17 00:55:15.814567 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.814586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-17 00:55:15.814594 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.814602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-17 00:55:15.814614 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.814621 | orchestrator | 2025-05-17 00:55:15.814631 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-17 00:55:15.814638 | orchestrator | Saturday 17 May 2025 00:51:03 +0000 
(0:00:00.996) 0:02:54.674 ********** 2025-05-17 00:55:15.814657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-17 00:55:15.814667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-17 00:55:15.814676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-17 00:55:15.814683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-17 00:55:15.814690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-17 00:55:15.814697 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.814704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-17 00:55:15.814711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-17 00:55:15.814718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-17 00:55:15.814725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-17 00:55:15.814739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-17 00:55:15.814746 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.814753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-17 00:55:15.814760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 
'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-17 00:55:15.814771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-17 00:55:15.814842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-17 00:55:15.814851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-17 00:55:15.814858 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.814865 | orchestrator | 2025-05-17 00:55:15.814872 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-17 00:55:15.814878 | orchestrator | Saturday 17 May 2025 00:51:04 +0000 (0:00:01.196) 0:02:55.870 ********** 2025-05-17 00:55:15.814885 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.814892 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.814910 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.814917 | orchestrator | 2025-05-17 00:55:15.814924 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-17 00:55:15.814931 | orchestrator | Saturday 17 May 2025 00:51:06 
+0000 (0:00:01.483) 0:02:57.354 ********** 2025-05-17 00:55:15.814937 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.814944 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.814951 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.814957 | orchestrator | 2025-05-17 00:55:15.814964 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-17 00:55:15.814971 | orchestrator | Saturday 17 May 2025 00:51:08 +0000 (0:00:02.178) 0:02:59.532 ********** 2025-05-17 00:55:15.814977 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.815000 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.815008 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.815015 | orchestrator | 2025-05-17 00:55:15.815022 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-17 00:55:15.815043 | orchestrator | Saturday 17 May 2025 00:51:09 +0000 (0:00:00.433) 0:02:59.965 ********** 2025-05-17 00:55:15.815050 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.815057 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.815064 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.815070 | orchestrator | 2025-05-17 00:55:15.815077 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-17 00:55:15.815084 | orchestrator | Saturday 17 May 2025 00:51:09 +0000 (0:00:00.262) 0:03:00.227 ********** 2025-05-17 00:55:15.815095 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.815102 | orchestrator | 2025-05-17 00:55:15.815109 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-17 00:55:15.815115 | orchestrator | Saturday 17 May 2025 00:51:10 +0000 (0:00:01.296) 0:03:01.524 ********** 2025-05-17 00:55:15.815123 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 00:55:15.815131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 00:55:15.815145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 00:55:15.815157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 00:55:15.815165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 00:55:15.815188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 00:55:15.815196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 00:55:15.815209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 00:55:15.815220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 00:55:15.815228 | orchestrator | 2025-05-17 00:55:15.815235 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-17 00:55:15.815242 | orchestrator | Saturday 17 May 2025 00:51:14 +0000 (0:00:03.740) 0:03:05.264 ********** 2025-05-17 00:55:15.815250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-17 00:55:15.815262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 00:55:15.815269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 00:55:15.815276 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.815289 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-17 00:55:15.815328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 00:55:15.815336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 00:55:15.815350 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.815358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-17 00:55:15.815365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 00:55:15.815372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 00:55:15.815379 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.815386 | orchestrator | 2025-05-17 00:55:15.815393 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-17 00:55:15.815399 | orchestrator | Saturday 17 May 2025 00:51:15 +0000 (0:00:00.772) 0:03:06.037 ********** 2025-05-17 00:55:15.815411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-17 00:55:15.815421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-17 00:55:15.815429 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.815436 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-17 00:55:15.815443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-17 00:55:15.815455 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.815462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-17 00:55:15.815469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-17 00:55:15.815475 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.815482 | orchestrator | 2025-05-17 00:55:15.815489 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-17 00:55:15.815496 | orchestrator | Saturday 17 May 2025 00:51:16 +0000 (0:00:01.173) 0:03:07.211 ********** 2025-05-17 00:55:15.815502 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.815509 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.815516 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.815522 | orchestrator | 2025-05-17 00:55:15.815529 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-17 00:55:15.815535 | orchestrator | Saturday 17 
May 2025 00:51:17 +0000 (0:00:01.323) 0:03:08.534 ********** 2025-05-17 00:55:15.815611 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.815619 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.815626 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.815632 | orchestrator | 2025-05-17 00:55:15.815639 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-17 00:55:15.815646 | orchestrator | Saturday 17 May 2025 00:51:19 +0000 (0:00:02.190) 0:03:10.724 ********** 2025-05-17 00:55:15.815652 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.815659 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.815666 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.815672 | orchestrator | 2025-05-17 00:55:15.815679 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-17 00:55:15.815686 | orchestrator | Saturday 17 May 2025 00:51:20 +0000 (0:00:00.303) 0:03:11.027 ********** 2025-05-17 00:55:15.815692 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.815699 | orchestrator | 2025-05-17 00:55:15.815705 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-17 00:55:15.815712 | orchestrator | Saturday 17 May 2025 00:51:21 +0000 (0:00:01.312) 0:03:12.340 ********** 2025-05-17 00:55:15.815719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 00:55:15.815736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.815749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 00:55:15.815756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.815764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 00:55:15.815771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.815778 | orchestrator | 2025-05-17 00:55:15.815785 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-17 00:55:15.815805 | orchestrator | Saturday 17 May 2025 00:51:26 +0000 (0:00:04.635) 0:03:16.975 ********** 2025-05-17 00:55:15.815832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 00:55:15.815841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.815848 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.815855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 00:55:15.815862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.815870 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.815882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 00:55:15.815899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.815907 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.815914 | orchestrator | 2025-05-17 00:55:15.815920 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-17 00:55:15.815927 | orchestrator | Saturday 17 May 2025 00:51:26 +0000 (0:00:00.847) 0:03:17.823 ********** 2025-05-17 00:55:15.815934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-17 00:55:15.815941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-17 00:55:15.815948 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.815955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-17 00:55:15.815962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-17 00:55:15.815968 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.815975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-17 00:55:15.815997 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-17 00:55:15.816004 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.816011 | orchestrator | 2025-05-17 00:55:15.816018 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-17 00:55:15.816025 | orchestrator | Saturday 17 May 2025 00:51:28 +0000 (0:00:01.545) 0:03:19.369 ********** 2025-05-17 00:55:15.816031 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.816038 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.816045 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.816051 | orchestrator | 2025-05-17 00:55:15.816058 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-17 00:55:15.816065 | orchestrator | Saturday 17 May 2025 00:51:29 +0000 (0:00:01.326) 0:03:20.695 ********** 2025-05-17 00:55:15.816072 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.816078 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.816090 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.816097 | orchestrator | 2025-05-17 00:55:15.816104 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-17 00:55:15.816111 | orchestrator | Saturday 17 May 2025 00:51:31 +0000 (0:00:02.160) 0:03:22.855 ********** 2025-05-17 00:55:15.816117 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.816124 | orchestrator | 2025-05-17 00:55:15.816131 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-17 00:55:15.816138 | orchestrator | Saturday 17 May 2025 00:51:33 +0000 (0:00:01.171) 0:03:24.026 ********** 2025-05-17 00:55:15.816149 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-17 00:55:15.816161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-17 00:55:15.816194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-17 00:55:15.816206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816225 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816253 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816260 | orchestrator | 2025-05-17 00:55:15.816267 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-17 00:55:15.816274 | orchestrator | Saturday 17 May 2025 00:51:37 +0000 (0:00:04.064) 0:03:28.091 ********** 2025-05-17 00:55:15.816286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-17 00:55:15.816297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816345 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.816352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-17 00:55:15.816359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816389 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.816397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-17 00:55:15.816404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.816431 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.816438 | orchestrator | 2025-05-17 00:55:15.816445 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-17 00:55:15.816452 | orchestrator | Saturday 17 May 2025 
00:51:38 +0000 (0:00:01.405) 0:03:29.496 ********** 2025-05-17 00:55:15.816459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-17 00:55:15.816470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-17 00:55:15.816477 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.816484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-17 00:55:15.816494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-17 00:55:15.816501 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.816509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-17 00:55:15.816516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-17 00:55:15.816522 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.816529 | orchestrator | 2025-05-17 00:55:15.816536 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-17 00:55:15.816543 | orchestrator | Saturday 17 May 2025 00:51:39 +0000 (0:00:01.207) 0:03:30.703 ********** 2025-05-17 00:55:15.816550 | orchestrator | changed: 
[testbed-node-0] 2025-05-17 00:55:15.816557 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.816568 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.816574 | orchestrator | 2025-05-17 00:55:15.816581 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-17 00:55:15.816588 | orchestrator | Saturday 17 May 2025 00:51:41 +0000 (0:00:01.397) 0:03:32.101 ********** 2025-05-17 00:55:15.816595 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.816601 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.816608 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.816615 | orchestrator | 2025-05-17 00:55:15.816621 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-17 00:55:15.816628 | orchestrator | Saturday 17 May 2025 00:51:43 +0000 (0:00:02.197) 0:03:34.299 ********** 2025-05-17 00:55:15.816635 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:55:15.816641 | orchestrator | 2025-05-17 00:55:15.816648 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-17 00:55:15.816655 | orchestrator | Saturday 17 May 2025 00:51:44 +0000 (0:00:01.367) 0:03:35.666 ********** 2025-05-17 00:55:15.816661 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 00:55:15.816668 | orchestrator | 2025-05-17 00:55:15.816675 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-17 00:55:15.816682 | orchestrator | Saturday 17 May 2025 00:51:47 +0000 (0:00:03.094) 0:03:38.761 ********** 2025-05-17 00:55:15.816689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-17 00:55:15.816705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-17 00:55:15.816713 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.816720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-17 00:55:15.816732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-17 00:55:15.816740 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.816759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 
'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-17 00:55:15.816772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-17 00:55:15.816779 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.816786 | orchestrator | 2025-05-17 00:55:15.816793 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-17 00:55:15.816800 | orchestrator | Saturday 17 May 2025 00:51:51 +0000 (0:00:03.229) 0:03:41.991 ********** 2025-05-17 00:55:15.816808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-17 00:55:15.816819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-17 00:55:15.816827 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.816838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-17 00:55:15.816850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-17 00:55:15.816858 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.816871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-17 00:55:15.816887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-17 00:55:15.816894 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.816902 | orchestrator | 2025-05-17 00:55:15.816908 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-17 00:55:15.816915 | orchestrator | Saturday 17 May 2025 00:51:54 +0000 
(0:00:03.611) 0:03:45.603 ********** 2025-05-17 00:55:15.816922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-17 00:55:15.816929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-17 00:55:15.816937 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.816944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', 
'']}})  2025-05-17 00:55:15.816952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-17 00:55:15.816959 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.816971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-17 00:55:15.817032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-17 
00:55:15.817041 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.817048 | orchestrator |
2025-05-17 00:55:15.817055 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-05-17 00:55:15.817062 | orchestrator | Saturday 17 May 2025 00:51:57 +0000 (0:00:03.251) 0:03:48.854 **********
2025-05-17 00:55:15.817069 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.817076 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.817082 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.817089 | orchestrator |
2025-05-17 00:55:15.817096 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-05-17 00:55:15.817103 | orchestrator | Saturday 17 May 2025 00:51:59 +0000 (0:00:01.879) 0:03:50.733 **********
2025-05-17 00:55:15.817109 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.817116 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.817123 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.817130 | orchestrator |
2025-05-17 00:55:15.817137 | orchestrator | TASK [include_role : masakari] *************************************************
2025-05-17 00:55:15.817143 | orchestrator | Saturday 17 May 2025 00:52:01 +0000 (0:00:01.295) 0:03:52.028 **********
2025-05-17 00:55:15.817150 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.817157 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.817164 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.817170 | orchestrator |
2025-05-17 00:55:15.817177 | orchestrator | TASK [include_role : memcached] ************************************************
2025-05-17 00:55:15.817184 | orchestrator | Saturday 17 May 2025 00:52:01 +0000 (0:00:00.453) 0:03:52.481 **********
2025-05-17 00:55:15.817190 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.817198 | orchestrator |
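The repeated `mariadb` items above carry the inputs the haproxy-config role renders from: a `custom_member_list` with one active server and two servers marked `backup`, each health-checked through the clustercheck service on port 4569. As a rough illustration only (plain Python, not the role's actual Jinja2 template; the listen address 192.168.16.254 is an invented placeholder, while the option strings and member lines are taken verbatim from the log), here is how those values assemble into an HAProxy `listen` block:

```python
# Values copied from the 'mariadb' haproxy entry in the log above.
service = {
    "mode": "tcp",
    "listen_port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s", "option httpchk"],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
        "",  # the list in the log ends with an empty entry; filtered out below
    ],
}

def render_listen_block(name: str, vip: str, svc: dict) -> str:
    """Sketch of rendering a service dict into an HAProxy listen block
    (not kolla-ansible's real template; 'vip' is a hypothetical address)."""
    lines = [f"listen {name}"]
    lines.append(f" mode {svc['mode']}")
    lines.extend(f" {opt}" for opt in svc["frontend_tcp_extra"] + svc["backend_tcp_extra"])
    lines.append(f" bind {vip}:{svc['listen_port']}")
    lines.extend(m for m in svc["custom_member_list"] if m.strip())
    return "\n".join(lines)

print(render_listen_block("mariadb", "192.168.16.254", service))
```

The `backup` keyword keeps HAProxy sending all traffic to testbed-node-0 while its clustercheck passes, a common active/passive pattern for Galera clusters where routing writes to a single node avoids certification conflicts.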
2025-05-17 00:55:15.817204 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-17 00:55:15.817210 | orchestrator | Saturday 17 May 2025 00:52:02 +0000 (0:00:01.400) 0:03:53.882 ********** 2025-05-17 00:55:15.817217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-17 00:55:15.817224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-17 00:55:15.817244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-17 00:55:15.817252 | orchestrator | 2025-05-17 00:55:15.817258 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-17 00:55:15.817269 | orchestrator | Saturday 17 May 2025 00:52:04 +0000 (0:00:01.804) 0:03:55.687 ********** 2025-05-17 00:55:15.817276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-17 00:55:15.817282 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.817289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-17 00:55:15.817296 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.817303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-17 00:55:15.817315 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.817322 | orchestrator | 2025-05-17 00:55:15.817328 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-17 00:55:15.817334 | orchestrator | Saturday 17 May 2025 00:52:05 +0000 (0:00:00.367) 0:03:56.055 ********** 2025-05-17 00:55:15.817341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': 
False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-17 00:55:15.817347 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.817354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-17 00:55:15.817361 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.817367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-17 00:55:15.817374 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.817380 | orchestrator |
2025-05-17 00:55:15.817390 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-05-17 00:55:15.817397 | orchestrator | Saturday 17 May 2025 00:52:06 +0000 (0:00:00.938) 0:03:56.993 **********
2025-05-17 00:55:15.817403 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.817409 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.817416 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.817422 | orchestrator |
2025-05-17 00:55:15.817428 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-05-17 00:55:15.817434 | orchestrator | Saturday 17 May 2025 00:52:06 +0000 (0:00:00.816) 0:03:57.809 **********
2025-05-17 00:55:15.817440 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.817446 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.817456 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.817462 | orchestrator |
2025-05-17 00:55:15.817469 | orchestrator | TASK [include_role : mistral] **************************************************
2025-05-17 00:55:15.817475 | orchestrator | Saturday 17 May 2025 00:52:08 +0000 (0:00:01.546) 0:03:59.356 **********
2025-05-17 00:55:15.817481 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.817488 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.817494 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.817500 | orchestrator |
2025-05-17 00:55:15.817601 | orchestrator | TASK [include_role : neutron] **************************************************
2025-05-17 00:55:15.817608 | orchestrator | Saturday 17 May 2025 00:52:08 +0000 (0:00:00.288) 0:03:59.645 **********
2025-05-17 00:55:15.817614 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.817620 | orchestrator |
2025-05-17 00:55:15.817627 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-05-17 00:55:15.817633 | orchestrator | Saturday 17 May 2025 00:52:10 +0000 (0:00:01.500) 0:04:01.145 **********
2025-05-17 00:55:15.817639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 00:55:15.817653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 00:55:15.817688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.817723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.817729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.817752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.817766 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.817777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.817803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 00:55:15.817813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.817820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 00:55:15.817865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.817883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.817889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.817906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.817931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.817942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.817956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.817966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.817977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 00:55:15.818005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 00:55:15.818062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 
00:55:15.818086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.818099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.818120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.818144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.818151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818157 | orchestrator | 2025-05-17 00:55:15.818163 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-17 00:55:15.818169 | orchestrator | Saturday 17 May 2025 00:52:15 +0000 (0:00:05.243) 0:04:06.388 ********** 2025-05-17 00:55:15.818183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 00:55:15.818207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 00:55:15.818242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 
00:55:15.818253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 00:55:15.818355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.818374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.818419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.818426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.818465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.818473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 00:55:15.818479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.818506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.818564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818575 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.818590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.818597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2025-05-17 00:55:15.818616 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.818623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.818669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.818690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 00:55:15.818700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 00:55:15.818718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 00:55:15.818724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.818731 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.818737 | orchestrator | 2025-05-17 00:55:15.818743 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-17 00:55:15.818750 | orchestrator | Saturday 17 May 2025 00:52:17 +0000 (0:00:02.027) 0:04:08.416 ********** 2025-05-17 00:55:15.818756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-17 00:55:15.818767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-17 00:55:15.818774 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.818780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-17 00:55:15.818786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-17 00:55:15.818792 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.818799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2025-05-17 00:55:15.818805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-17 00:55:15.818811 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.818817 | orchestrator |
2025-05-17 00:55:15.818824 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-05-17 00:55:15.818833 | orchestrator | Saturday 17 May 2025 00:52:19 +0000 (0:00:01.968) 0:04:10.385 **********
2025-05-17 00:55:15.818840 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.818846 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.818852 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.818858 | orchestrator |
2025-05-17 00:55:15.818864 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-05-17 00:55:15.818870 | orchestrator | Saturday 17 May 2025 00:52:20 +0000 (0:00:01.431) 0:04:11.816 **********
2025-05-17 00:55:15.818877 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.818883 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.818893 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.818899 | orchestrator |
2025-05-17 00:55:15.818905 | orchestrator | TASK [include_role : placement] ************************************************
2025-05-17 00:55:15.818911 | orchestrator | Saturday 17 May 2025 00:52:23 +0000 (0:00:02.250) 0:04:14.067 **********
2025-05-17 00:55:15.818918 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.818924 | orchestrator |
2025-05-17 00:55:15.818930 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-05-17 00:55:15.818936 | orchestrator | Saturday 17 May 2025 00:52:24 +0000 (0:00:01.523) 0:04:15.590 **********
2025-05-17 00:55:15.818942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.818949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.818960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.818967 | orchestrator |
2025-05-17 00:55:15.818973 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-05-17 00:55:15.818979 | orchestrator | Saturday 17 May 2025 00:52:28 +0000 (0:00:03.702) 0:04:19.293 **********
2025-05-17 00:55:15.819031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.819039 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.819045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.819052 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.819063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.819070 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.819076 | orchestrator |
2025-05-17 00:55:15.819082 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-05-17 00:55:15.819088 | orchestrator | Saturday 17 May 2025 00:52:29 +0000 (0:00:00.758) 0:04:20.052 **********
2025-05-17 00:55:15.819095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819107 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.819114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819126 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.819132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819149 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.819155 | orchestrator |
2025-05-17 00:55:15.819161 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-05-17 00:55:15.819167 | orchestrator | Saturday 17 May 2025 00:52:30 +0000 (0:00:00.913) 0:04:20.965 **********
2025-05-17 00:55:15.819173 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.819180 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.819186 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.819192 | orchestrator |
2025-05-17 00:55:15.819198 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-05-17 00:55:15.819207 | orchestrator | Saturday 17 May 2025 00:52:31 +0000 (0:00:01.413) 0:04:22.379 **********
2025-05-17 00:55:15.819214 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.819220 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.819226 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.819232 | orchestrator |
2025-05-17 00:55:15.819238 | orchestrator | TASK [include_role : nova] *****************************************************
2025-05-17 00:55:15.819244 | orchestrator | Saturday 17 May 2025 00:52:33 +0000 (0:00:02.327) 0:04:24.706 **********
2025-05-17 00:55:15.819276 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.819283 | orchestrator |
2025-05-17 00:55:15.819289 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-05-17 00:55:15.819296 | orchestrator | Saturday 17 May 2025 00:52:35 +0000 (0:00:01.604) 0:04:26.311 **********
2025-05-17 00:55:15.819302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.819316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.819351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.819373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819391 | orchestrator |
2025-05-17 00:55:15.819452 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-05-17 00:55:15.819459 | orchestrator | Saturday 17 May 2025 00:52:40 +0000 (0:00:05.222) 0:04:31.533 **********
2025-05-17 00:55:15.819473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.819488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819500 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.819510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.819520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819551 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.819558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.819564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.819576 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.819582 | orchestrator |
2025-05-17 00:55:15.819588 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-05-17 00:55:15.819593 | orchestrator | Saturday 17 May 2025 00:52:41 +0000 (0:00:00.940) 0:04:32.474 **********
2025-05-17 00:55:15.819603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819633 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.819639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819661 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.819667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-17 00:55:15.819689 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.819695 | orchestrator |
2025-05-17 00:55:15.819700 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-05-17 00:55:15.819706 | orchestrator | Saturday 17 May 2025 00:52:42 +0000 (0:00:01.282) 0:04:33.757 **********
2025-05-17 00:55:15.819711 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.819717 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.819722 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.819727 | orchestrator |
2025-05-17 00:55:15.819733 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-05-17 00:55:15.819738 | orchestrator | Saturday 17 May 2025 00:52:44 +0000 (0:00:01.385) 0:04:35.142 **********
2025-05-17 00:55:15.819744 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.819749 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.819755 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.819760 | orchestrator |
2025-05-17 00:55:15.819769 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-05-17 00:55:15.819775 | orchestrator | Saturday 17 May 2025 00:52:46 +0000 (0:00:02.325) 0:04:37.468 **********
2025-05-17 00:55:15.819780 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.819786 | orchestrator |
2025-05-17 00:55:15.819791 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-05-17 00:55:15.819797 | orchestrator | Saturday 17 May 2025 00:52:47 +0000 (0:00:01.375) 0:04:38.843 **********
2025-05-17 00:55:15.819802 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-05-17 00:55:15.819808 | orchestrator |
2025-05-17 00:55:15.819813 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-05-17 00:55:15.819819 | orchestrator | Saturday 17 May 2025 00:52:49 +0000 (0:00:01.498) 0:04:40.341 **********
2025-05-17 00:55:15.819829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-17 00:55:15.819838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-17 00:55:15.819845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-17 00:55:15.819851 | orchestrator |
2025-05-17 00:55:15.819856 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-05-17 00:55:15.819862 | orchestrator | Saturday 17 May 2025 00:52:54 +0000 (0:00:04.714) 0:04:45.056 **********
2025-05-17 00:55:15.819868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-17 00:55:15.819873 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.819880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-17 00:55:15.819886 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.819895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-17 00:55:15.819901 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.819906 | orchestrator |
2025-05-17 00:55:15.819912 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-05-17 00:55:15.819917 | orchestrator | Saturday 17 May 2025 00:52:55 +0000 (0:00:01.515) 0:04:46.571 **********
2025-05-17 00:55:15.819923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-17 00:55:15.819929 | orchestrator | skipping: [testbed-node-0]
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-17 00:55:15.819934 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.819940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-17 00:55:15.819953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-17 00:55:15.819963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-17 00:55:15.819969 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.819974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-17 00:55:15.819980 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.819997 | orchestrator | 2025-05-17 00:55:15.820003 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-17 00:55:15.820008 | orchestrator | Saturday 17 May 2025 00:52:57 +0000 (0:00:01.995) 0:04:48.566 ********** 2025-05-17 00:55:15.820014 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.820019 | orchestrator | changed: [testbed-node-1] 
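The nova-novncproxy item copied above pairs an internal listener with an external one under its 'haproxy' key. As an illustrative sketch (plain Python, not kolla-ansible's implementation; `split_frontends` is a hypothetical helper), the internal/external split visible in the logged dict can be modeled as:

```python
# Illustrative sketch, not kolla-ansible code: partition a kolla-style
# 'haproxy' service dict (as logged for nova-novncproxy above) into
# internal and external listeners. Disabled listeners are dropped.
def split_frontends(haproxy_conf):
    """Return (internal, external) listener dicts, keeping only enabled ones."""
    internal, external = {}, {}
    for name, conf in haproxy_conf.items():
        if not conf.get("enabled"):
            continue  # disabled listeners are skipped entirely
        (external if conf.get("external") else internal)[name] = conf
    return internal, external

# Values taken from the logged item for nova-novncproxy.
novncproxy = {
    "nova_novncproxy": {
        "enabled": True, "mode": "http", "external": False,
        "port": "6080", "listen_port": "6080",
        "backend_http_extra": ["timeout tunnel 1h"],
    },
    "nova_novncproxy_external": {
        "enabled": True, "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "6080", "listen_port": "6080",
        "backend_http_extra": ["timeout tunnel 1h"],
    },
}

internal, external = split_frontends(novncproxy)
print(sorted(internal), sorted(external))
# → ['nova_novncproxy'] ['nova_novncproxy_external']
```

The `timeout tunnel 1h` entry in `backend_http_extra` reflects that noVNC sessions are long-lived WebSocket tunnels rather than short HTTP requests.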
2025-05-17 00:55:15.820025 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.820030 | orchestrator | 2025-05-17 00:55:15.820035 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-17 00:55:15.820041 | orchestrator | Saturday 17 May 2025 00:53:00 +0000 (0:00:02.861) 0:04:51.428 ********** 2025-05-17 00:55:15.820046 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.820052 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.820057 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.820062 | orchestrator | 2025-05-17 00:55:15.820068 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-17 00:55:15.820073 | orchestrator | Saturday 17 May 2025 00:53:03 +0000 (0:00:03.373) 0:04:54.801 ********** 2025-05-17 00:55:15.820079 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-17 00:55:15.820103 | orchestrator | 2025-05-17 00:55:15.820112 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-17 00:55:15.820118 | orchestrator | Saturday 17 May 2025 00:53:05 +0000 (0:00:01.198) 0:04:55.999 ********** 2025-05-17 00:55:15.820123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-17 00:55:15.820135 | orchestrator | skipping: [testbed-node-0] 2025-05-17 
00:55:15.820141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-17 00:55:15.820146 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.820152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-17 00:55:15.820158 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.820163 | orchestrator | 2025-05-17 00:55:15.820168 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-17 00:55:15.820174 | orchestrator | Saturday 17 May 2025 00:53:06 +0000 (0:00:01.585) 0:04:57.584 ********** 2025-05-17 00:55:15.820215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-17 00:55:15.820222 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.820230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-17 00:55:15.820236 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.820242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-17 00:55:15.820251 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.820257 | orchestrator | 2025-05-17 00:55:15.820262 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-17 00:55:15.820267 | orchestrator | Saturday 17 May 2025 00:53:08 +0000 (0:00:01.960) 0:04:59.545 ********** 2025-05-17 00:55:15.820273 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.820278 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.820284 | orchestrator | skipping: [testbed-node-2] 
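The nova-spicehtml5proxy tasks above all report `skipping` because the service item carries `'enabled': False`; in this testbed only the novncproxy console is enabled. A minimal sketch of that gating logic, assuming plain Python rather than the role's actual Jinja conditionals:

```python
# Hypothetical helper mirroring the changed/skipping pattern in the log:
# services whose 'enabled' flag is False are skipped, the rest configured.
def plan_actions(services):
    """Split service names into (to_configure, skipped) by their 'enabled' flag."""
    to_configure = [name for name, svc in services.items() if svc.get("enabled")]
    skipped = [name for name, svc in services.items() if not svc.get("enabled")]
    return to_configure, skipped

# Flags taken from the logged items; only novncproxy is enabled here.
services = {
    "nova-novncproxy": {"group": "nova-novncproxy", "enabled": True},
    "nova-spicehtml5proxy": {"group": "nova-spicehtml5proxy", "enabled": False},
    "nova-serialproxy": {"group": "nova-serialproxy", "enabled": False},
}

todo, skipped = plan_actions(services)
print(todo, skipped)
# → ['nova-novncproxy'] ['nova-spicehtml5proxy', 'nova-serialproxy']
```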
2025-05-17 00:55:15.820289 | orchestrator | 
2025-05-17 00:55:15.820294 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-17 00:55:15.820299 | orchestrator | Saturday 17 May 2025 00:53:10 +0000 (0:00:01.706) 0:05:01.251 **********
2025-05-17 00:55:15.820305 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:55:15.820310 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:55:15.820315 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:55:15.820321 | orchestrator | 
2025-05-17 00:55:15.820326 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-17 00:55:15.820331 | orchestrator | Saturday 17 May 2025 00:53:13 +0000 (0:00:03.004) 0:05:04.256 **********
2025-05-17 00:55:15.820337 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:55:15.820342 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:55:15.820347 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:55:15.820353 | orchestrator | 
2025-05-17 00:55:15.820358 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-05-17 00:55:15.820363 | orchestrator | Saturday 17 May 2025 00:53:16 +0000 (0:00:03.357) 0:05:07.613 **********
2025-05-17 00:55:15.820369 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-05-17 00:55:15.820374 | orchestrator | 
2025-05-17 00:55:15.820379 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-05-17 00:55:15.820385 | orchestrator | Saturday 17 May 2025 00:53:17 +0000 (0:00:01.208) 0:05:08.822 **********
2025-05-17 00:55:15.820390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port':
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-17 00:55:15.820396 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.820401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-17 00:55:15.820407 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.820431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-17 00:55:15.820437 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.820446 | orchestrator | 2025-05-17 00:55:15.820452 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-17 00:55:15.820460 | orchestrator | Saturday 17 May 2025 00:53:19 +0000 (0:00:01.371) 0:05:10.193 ********** 2025-05-17 00:55:15.820466 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-17 00:55:15.820472 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.820477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-17 00:55:15.820483 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.820488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-17 00:55:15.820494 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.820499 | orchestrator | 2025-05-17 00:55:15.820505 | orchestrator | TASK [haproxy-config : 
Configuring firewall for nova-cell:nova-serialproxy] ****
2025-05-17 00:55:15.820510 | orchestrator | Saturday 17 May 2025 00:53:21 +0000 (0:00:01.892) 0:05:12.086 **********
2025-05-17 00:55:15.820515 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.820521 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.820526 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.820531 | orchestrator | 
2025-05-17 00:55:15.820537 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-17 00:55:15.820542 | orchestrator | Saturday 17 May 2025 00:53:22 +0000 (0:00:01.785) 0:05:13.871 **********
2025-05-17 00:55:15.820547 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:55:15.820553 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:55:15.820558 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:55:15.820564 | orchestrator | 
2025-05-17 00:55:15.820569 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-17 00:55:15.820574 | orchestrator | Saturday 17 May 2025 00:53:25 +0000 (0:00:02.512) 0:05:16.384 **********
2025-05-17 00:55:15.820580 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:55:15.820585 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:55:15.820591 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:55:15.820596 | orchestrator | 
2025-05-17 00:55:15.820601 | orchestrator | TASK [include_role : octavia] **************************************************
2025-05-17 00:55:15.820607 | orchestrator | Saturday 17 May 2025 00:53:28 +0000 (0:00:03.006) 0:05:19.391 **********
2025-05-17 00:55:15.820612 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.820617 | orchestrator | 
2025-05-17 00:55:15.820623 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-05-17 00:55:15.820628 | orchestrator | Saturday 17 May 2025 00:53:30 +0000
(0:00:01.738) 0:05:21.129 ********** 2025-05-17 00:55:15.820641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.820651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-17 00:55:15.820657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-17 00:55:15.820663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-17 00:55:15.820685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.820700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.820710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-17 00:55:15.820723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-17 00:55:15.820729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-17 00:55:15.820736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.820742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-17 00:55:15.820748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-17 00:55:15.820760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-17 00:55:15.820770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-17 00:55:15.820776 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-17 00:55:15.820782 | orchestrator | 2025-05-17 00:55:15.820787 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-17 00:55:15.820793 | orchestrator | Saturday 17 May 2025 00:53:34 +0000 (0:00:04.359) 0:05:25.488 ********** 2025-05-17 00:55:15.820799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-17 00:55:15.820846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-17 00:55:15.820852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.820871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.820961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.820979 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.821021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.821027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-17 00:55:15.821033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.821039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.821066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.821072 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.821106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.821113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-17 00:55:15.821119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.821124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-17 00:55:15.821135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-17 00:55:15.821140 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.821146 | orchestrator |
2025-05-17 00:55:15.821151 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-05-17 00:55:15.821156 | orchestrator | Saturday 17 May 2025 00:53:35 +0000 (0:00:01.148) 0:05:26.637 **********
2025-05-17 00:55:15.821161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-17 00:55:15.821167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-17 00:55:15.821172 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.821177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-17 00:55:15.821192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-17 00:55:15.821198 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.821203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-17 00:55:15.821210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-17 00:55:15.821215 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.821220 | orchestrator |
2025-05-17 00:55:15.821225 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-05-17 00:55:15.821229 | orchestrator | Saturday 17 May 2025 00:53:36 +0000 (0:00:01.079) 0:05:27.717 **********
2025-05-17 00:55:15.821234 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.821239 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.821244 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.821248 | orchestrator |
2025-05-17 00:55:15.821253 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-05-17 00:55:15.821258 | orchestrator | Saturday 17 May 2025 00:53:38 +0000 (0:00:01.453) 0:05:29.171 **********
2025-05-17 00:55:15.821263 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.821267 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.821272 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.821277 | orchestrator |
2025-05-17 00:55:15.821282 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-05-17 00:55:15.821286 | orchestrator | Saturday 17 May 2025 00:53:40 +0000 (0:00:02.456) 0:05:31.628 **********
2025-05-17 00:55:15.821291 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.821296 | orchestrator |
2025-05-17 00:55:15.821300 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-05-17 00:55:15.821309 | orchestrator | Saturday 17 May 2025 00:53:42 +0000 (0:00:01.527) 0:05:33.155 **********
2025-05-17 00:55:15.821314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-17 00:55:15.821320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-17 00:55:15.821336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-17 00:55:15.821346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-17 00:55:15.821352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-17 00:55:15.821377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-17 00:55:15.821383 | orchestrator |
2025-05-17 00:55:15.821388 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-05-17 00:55:15.821393 | orchestrator | Saturday 17 May 2025 00:53:48 +0000 (0:00:06.167) 0:05:39.323 **********
2025-05-17 00:55:15.821401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-17 00:55:15.821410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-17 00:55:15.821418 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.821424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-17 00:55:15.821429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-17 00:55:15.821434 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.821443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-17 00:55:15.821451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-17 00:55:15.821460 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.821465 | orchestrator |
2025-05-17 00:55:15.821469 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-05-17 00:55:15.821474 | orchestrator | Saturday 17 May 2025 00:53:49 +0000 (0:00:00.869) 0:05:40.193 **********
2025-05-17 00:55:15.821479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-17 00:55:15.821484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-17 00:55:15.821489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-17 00:55:15.821494 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.821499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-17 00:55:15.821504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-17 00:55:15.821509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-17 00:55:15.821514 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.821519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-17 00:55:15.821524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-17 00:55:15.821529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-17 00:55:15.821533 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.821538 | orchestrator |
2025-05-17 00:55:15.821543 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-05-17 00:55:15.821548 | orchestrator | Saturday 17 May 2025 00:53:50 +0000 (0:00:01.253) 0:05:41.446 **********
2025-05-17 00:55:15.821552 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.821557 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.821562 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.821567 | orchestrator |
2025-05-17 00:55:15.821572 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-05-17 00:55:15.821576 | orchestrator | Saturday 17 May 2025 00:53:51 +0000 (0:00:00.677) 0:05:42.124 **********
2025-05-17 00:55:15.821581 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.821586 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.821591 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.821595 | orchestrator |
2025-05-17 00:55:15.821610 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-05-17 00:55:15.821616 | orchestrator | Saturday 17 May 2025 00:53:52 +0000 (0:00:01.649) 0:05:43.773 **********
2025-05-17 00:55:15.821624 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.821629 | orchestrator |
2025-05-17 00:55:15.821633 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-05-17 00:55:15.821638 | orchestrator | Saturday 17 May 2025 00:53:54 +0000 (0:00:01.815) 0:05:45.589 **********
2025-05-17 00:55:15.821648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 00:55:15.821654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 00:55:15.821659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 00:55:15.821665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.821670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 00:55:15.821675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.821693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.821701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.821707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 00:55:15.821712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 00:55:15.821717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 00:55:15.821722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 00:55:15.821727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.821745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.821754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 00:55:15.821759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 00:55:15.821765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 00:55:15.821770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name':
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 00:55:15.821792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 00:55:15.821797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821818 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 00:55:15.821823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 00:55:15.821849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 
'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-17 00:55:15.821860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 00:55:15.821865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 00:55:15.821889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': 
{'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821894 | orchestrator | 2025-05-17 00:55:15.821899 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-17 00:55:15.821904 | orchestrator | Saturday 17 May 2025 00:53:59 +0000 (0:00:04.794) 0:05:50.384 ********** 2025-05-17 00:55:15.821909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 00:55:15.821914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 00:55:15.821919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.821932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 00:55:15.821941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 00:55:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:15.821946 | orchestrator | 2025-05-17 00:55:15 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:15.821951 | orchestrator | 2025-05-17 00:55:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:15.821959 | orchestrator | 2025-05-17 00:55:15.821965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 00:55:15.821970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 00:55:15.821975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 00:55:15.821994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.822003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.822011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.822035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.822040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 00:55:15.822046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 00:55:15.822051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.822062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 00:55:15.822067 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.822080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 00:55:15.822085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.822091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 00:55:15.822096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 00:55:15.822101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
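The `(item={'key': ..., 'value': ...})` loop entries above come from iterating a services mapping as key/value pairs, in the shape produced by Ansible's `dict2items` filter. A minimal sketch of how such items resolve to "changed" versus "skipping" — the service names and dict structure are taken from the log, but the selection logic shown (enabled services carrying an `haproxy` mapping get configured, everything else is skipped) is a simplified assumption, not the role's actual implementation:

```python
# Sketch (assumed, simplified logic) of how dict2items-style loop items in the
# log above resolve to "changed" vs "skipping".

def dict2items(services):
    """Mimic Ansible's dict2items filter: {'a': {...}} -> [{'key': 'a', 'value': {...}}]."""
    return [{"key": k, "value": v} for k, v in services.items()]

def needs_haproxy_config(item):
    # Assumption: only enabled services that define an 'haproxy' mapping
    # get frontend/backend configuration; all others are skipped.
    svc = item["value"]
    return bool(svc.get("enabled")) and bool(svc.get("haproxy"))

services = {
    "prometheus-server": {"enabled": True,
                          "haproxy": {"prometheus_server": {"port": "9091"}}},
    "prometheus-node-exporter": {"enabled": True},   # no haproxy key -> skipping
    "prometheus-openstack-exporter": {"enabled": False,  # disabled -> skipping
                                      "haproxy": {"prometheus_openstack_exporter": {"port": "9198"}}},
}

for item in dict2items(services):
    status = "changed" if needs_haproxy_config(item) else "skipping"
    print(f"{status}: (item={{'key': '{item['key']}', ...}})")
```

Under these assumptions, only `prometheus-server` prints as "changed", matching the pattern visible in the log where exporters without an `haproxy` mapping and disabled services are skipped.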
2025-05-17 00:55:15.822109 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.822114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 00:55:15.822122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 00:55:15.822130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-17 00:55:15.822136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.822141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 00:55:15.822146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 00:55:15.822155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 00:55:15.822160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.822168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.822176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-17 00:55:15.822182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 00:55:15.822187 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822191 | orchestrator |
2025-05-17 00:55:15.822196 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-05-17 00:55:15.822201 | orchestrator | Saturday 17 May 2025 00:54:00 +0000 (0:00:01.408) 0:05:51.793 **********
2025-05-17 00:55:15.822206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-05-17 00:55:15.822215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-05-17 00:55:15.822220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-17 00:55:15.822226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-17 00:55:15.822231 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-05-17 00:55:15.822241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-05-17 00:55:15.822246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-17 00:55:15.822251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-17 00:55:15.822256 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-05-17 00:55:15.822269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-05-17 00:55:15.822277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-17 00:55:15.822282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-05-17 00:55:15.822287 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822291 | orchestrator |
2025-05-17 00:55:15.822296 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-05-17 00:55:15.822301 | orchestrator | Saturday 17 May 2025 00:54:02 +0000 (0:00:01.323) 0:05:53.116 **********
2025-05-17 00:55:15.822306 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822311 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822315 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822320 | orchestrator |
2025-05-17 00:55:15.822328 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-05-17 00:55:15.822333 | orchestrator | Saturday 17 May 2025 00:54:03 +0000 (0:00:00.937) 0:05:54.053 **********
2025-05-17 00:55:15.822338 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822343 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822348 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822352 | orchestrator |
2025-05-17 00:55:15.822357 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-05-17 00:55:15.822362 | orchestrator | Saturday 17 May 2025 00:54:04 +0000 (0:00:01.689) 0:05:55.743 **********
2025-05-17 00:55:15.822367 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.822371 | orchestrator |
2025-05-17 00:55:15.822376 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-05-17 00:55:15.822381 | orchestrator | Saturday 17 May 2025 00:54:06 +0000 (0:00:01.570) 0:05:57.313 **********
2025-05-17 00:55:15.822386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:55:15.822391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:55:15.822402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:55:15.822411 | orchestrator |
2025-05-17 00:55:15.822416 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-05-17 00:55:15.822421 | orchestrator | Saturday 17 May 2025 00:54:09 +0000 (0:00:02.761) 0:06:00.074 **********
2025-05-17 00:55:15.822426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:55:15.822431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:55:15.822437 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822442 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-17 00:55:15.822452 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822457 | orchestrator |
2025-05-17 00:55:15.822462 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-05-17 00:55:15.822470 | orchestrator | Saturday 17 May 2025 00:54:09 +0000 (0:00:00.646) 0:06:00.721 **********
2025-05-17 00:55:15.822475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-17 00:55:15.822480 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-17 00:55:15.822493 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-17 00:55:15.822505 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822510 | orchestrator |
2025-05-17 00:55:15.822514 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-05-17 00:55:15.822519 | orchestrator | Saturday 17 May 2025 00:54:10 +0000 (0:00:01.111) 0:06:01.832 **********
2025-05-17 00:55:15.822524 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822529 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822533 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822538 | orchestrator |
2025-05-17 00:55:15.822543 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-05-17 00:55:15.822548 | orchestrator | Saturday 17 May 2025 00:54:11 +0000 (0:00:00.721) 0:06:02.554 **********
2025-05-17 00:55:15.822552 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822557 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822562 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822566 | orchestrator |
2025-05-17 00:55:15.822571 | orchestrator | TASK [include_role : skyline] **************************************************
2025-05-17 00:55:15.822576 | orchestrator | Saturday 17 May 2025 00:54:13 +0000 (0:00:01.709) 0:06:04.263 **********
2025-05-17 00:55:15.822581 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:55:15.822585 | orchestrator |
2025-05-17 00:55:15.822590 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-05-17 00:55:15.822595 | orchestrator | Saturday 17 May 2025 00:54:15 +0000 (0:00:01.933) 0:06:06.196 **********
2025-05-17 00:55:15.822599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822640 | orchestrator |
2025-05-17 00:55:15.822645 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-05-17 00:55:15.822650 | orchestrator | Saturday 17 May 2025 00:54:22 +0000 (0:00:07.349) 0:06:13.546 **********
2025-05-17 00:55:15.822658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822676 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822691 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-17 00:55:15.822716 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822721 | orchestrator |
2025-05-17 00:55:15.822726 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-05-17 00:55:15.822731 | orchestrator | Saturday 17 May 2025 00:54:23 +0000 (0:00:00.891) 0:06:14.437 **********
2025-05-17 00:55:15.822736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822756 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822783 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-17 00:55:15.822807 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822812 | orchestrator |
2025-05-17 00:55:15.822817 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-05-17 00:55:15.822822 | orchestrator | Saturday 17 May 2025 00:54:25 +0000 (0:00:01.636) 0:06:16.074 **********
2025-05-17 00:55:15.822826 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.822834 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.822839 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.822844 | orchestrator |
2025-05-17 00:55:15.822849 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-05-17 00:55:15.822854 | orchestrator | Saturday 17 May 2025 00:54:26 +0000 (0:00:01.421) 0:06:17.495 **********
2025-05-17 00:55:15.822858 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:55:15.822863 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:55:15.822868 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:55:15.822873 | orchestrator |
2025-05-17 00:55:15.822877 | orchestrator | TASK [include_role : swift] ****************************************************
2025-05-17 00:55:15.822885 | orchestrator | Saturday 17 May 2025 00:54:29 +0000 (0:00:02.434) 0:06:19.929 **********
2025-05-17 00:55:15.822890 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822895 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822899 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822904 | orchestrator |
2025-05-17 00:55:15.822909 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-05-17 00:55:15.822914 | orchestrator | Saturday 17 May 2025 00:54:29 +0000 (0:00:00.330) 0:06:20.260 **********
2025-05-17 00:55:15.822919 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822923 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822928 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822933 | orchestrator |
2025-05-17 00:55:15.822938 | orchestrator | TASK [include_role : trove] ****************************************************
2025-05-17 00:55:15.822942 | orchestrator | Saturday 17 May 2025 00:54:29 +0000 (0:00:00.536) 0:06:20.797 **********
2025-05-17 00:55:15.822947 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822952 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822957 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.822961 | orchestrator |
2025-05-17 00:55:15.822966 | orchestrator | TASK [include_role : venus] ****************************************************
2025-05-17 00:55:15.822971 | orchestrator | Saturday 17 May 2025 00:54:30 +0000 (0:00:00.583) 0:06:21.380 **********
2025-05-17 00:55:15.822976 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:55:15.822980 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:55:15.822995 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:55:15.823000 | orchestrator |
2025-05-17 00:55:15.823005 | orchestrator | TASK [include_role : watcher] **************************************************
2025-05-17 00:55:15.823010 | orchestrator | Saturday 17 May 2025 00:54:30 +0000 (0:00:00.300) 0:06:21.680 **********
2025-05-17 00:55:15.823018 | orchestrator | skipping: [testbed-node-0]
2025-05-17
00:55:15.823023 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.823028 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.823032 | orchestrator | 2025-05-17 00:55:15.823037 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-17 00:55:15.823042 | orchestrator | Saturday 17 May 2025 00:54:31 +0000 (0:00:00.579) 0:06:22.259 ********** 2025-05-17 00:55:15.823047 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.823051 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.823056 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.823061 | orchestrator | 2025-05-17 00:55:15.823066 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-17 00:55:15.823070 | orchestrator | Saturday 17 May 2025 00:54:32 +0000 (0:00:01.015) 0:06:23.274 ********** 2025-05-17 00:55:15.823075 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.823080 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:55:15.823085 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:55:15.823089 | orchestrator | 2025-05-17 00:55:15.823094 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-17 00:55:15.823099 | orchestrator | Saturday 17 May 2025 00:54:33 +0000 (0:00:00.662) 0:06:23.937 ********** 2025-05-17 00:55:15.823103 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.823108 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:55:15.823113 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:55:15.823118 | orchestrator | 2025-05-17 00:55:15.823122 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-17 00:55:15.823127 | orchestrator | Saturday 17 May 2025 00:54:33 +0000 (0:00:00.592) 0:06:24.530 ********** 2025-05-17 00:55:15.823132 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.823137 | orchestrator | ok: 
[testbed-node-1] 2025-05-17 00:55:15.823141 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:55:15.823146 | orchestrator | 2025-05-17 00:55:15.823151 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-17 00:55:15.823156 | orchestrator | Saturday 17 May 2025 00:54:34 +0000 (0:00:01.207) 0:06:25.738 ********** 2025-05-17 00:55:15.823160 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:55:15.823165 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.823170 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:55:15.823174 | orchestrator | 2025-05-17 00:55:15.823179 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-17 00:55:15.823184 | orchestrator | Saturday 17 May 2025 00:54:36 +0000 (0:00:01.207) 0:06:26.945 ********** 2025-05-17 00:55:15.823189 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.823193 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:55:15.823198 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:55:15.823203 | orchestrator | 2025-05-17 00:55:15.823208 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-17 00:55:15.823212 | orchestrator | Saturday 17 May 2025 00:54:36 +0000 (0:00:00.960) 0:06:27.906 ********** 2025-05-17 00:55:15.823217 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.823222 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.823226 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.823231 | orchestrator | 2025-05-17 00:55:15.823236 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-17 00:55:15.823241 | orchestrator | Saturday 17 May 2025 00:54:42 +0000 (0:00:05.244) 0:06:33.151 ********** 2025-05-17 00:55:15.823245 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.823250 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:55:15.823255 | orchestrator | 
ok: [testbed-node-2] 2025-05-17 00:55:15.823259 | orchestrator | 2025-05-17 00:55:15.823264 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-17 00:55:15.823269 | orchestrator | Saturday 17 May 2025 00:54:46 +0000 (0:00:04.040) 0:06:37.191 ********** 2025-05-17 00:55:15.823274 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.823278 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.823286 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.823291 | orchestrator | 2025-05-17 00:55:15.823299 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-17 00:55:15.823304 | orchestrator | Saturday 17 May 2025 00:54:57 +0000 (0:00:11.440) 0:06:48.632 ********** 2025-05-17 00:55:15.823309 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.823314 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:55:15.823319 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:55:15.823324 | orchestrator | 2025-05-17 00:55:15.823328 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-17 00:55:15.823333 | orchestrator | Saturday 17 May 2025 00:54:58 +0000 (0:00:00.747) 0:06:49.379 ********** 2025-05-17 00:55:15.823341 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:55:15.823346 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:55:15.823351 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:55:15.823355 | orchestrator | 2025-05-17 00:55:15.823360 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-17 00:55:15.823365 | orchestrator | Saturday 17 May 2025 00:55:08 +0000 (0:00:10.391) 0:06:59.770 ********** 2025-05-17 00:55:15.823370 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.823374 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.823379 | orchestrator | skipping: [testbed-node-2] 
2025-05-17 00:55:15.823384 | orchestrator | 2025-05-17 00:55:15.823389 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-17 00:55:15.823393 | orchestrator | Saturday 17 May 2025 00:55:09 +0000 (0:00:00.568) 0:07:00.339 ********** 2025-05-17 00:55:15.823398 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.823403 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.823408 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.823412 | orchestrator | 2025-05-17 00:55:15.823417 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-17 00:55:15.823422 | orchestrator | Saturday 17 May 2025 00:55:09 +0000 (0:00:00.333) 0:07:00.673 ********** 2025-05-17 00:55:15.823426 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.823431 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.823436 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.823441 | orchestrator | 2025-05-17 00:55:15.823445 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-17 00:55:15.823450 | orchestrator | Saturday 17 May 2025 00:55:10 +0000 (0:00:00.612) 0:07:01.285 ********** 2025-05-17 00:55:15.823455 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.823460 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.823464 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.823469 | orchestrator | 2025-05-17 00:55:15.823474 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-17 00:55:15.823478 | orchestrator | Saturday 17 May 2025 00:55:10 +0000 (0:00:00.579) 0:07:01.865 ********** 2025-05-17 00:55:15.823483 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.823488 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.823493 | orchestrator | skipping: [testbed-node-2] 
2025-05-17 00:55:15.823497 | orchestrator | 2025-05-17 00:55:15.823502 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-17 00:55:15.823507 | orchestrator | Saturday 17 May 2025 00:55:11 +0000 (0:00:00.611) 0:07:02.477 ********** 2025-05-17 00:55:15.823512 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:55:15.823516 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:55:15.823521 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:55:15.823526 | orchestrator | 2025-05-17 00:55:15.823530 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-17 00:55:15.823535 | orchestrator | Saturday 17 May 2025 00:55:11 +0000 (0:00:00.346) 0:07:02.824 ********** 2025-05-17 00:55:15.823540 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.823545 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:55:15.823549 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:55:15.823559 | orchestrator | 2025-05-17 00:55:15.823564 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-17 00:55:15.823569 | orchestrator | Saturday 17 May 2025 00:55:13 +0000 (0:00:01.168) 0:07:03.992 ********** 2025-05-17 00:55:15.823574 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:55:15.823578 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:55:15.823583 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:55:15.823588 | orchestrator | 2025-05-17 00:55:15.823593 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:55:15.823598 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-17 00:55:15.823603 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-17 00:55:15.823607 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 
failed=0 skipped=92  rescued=0 ignored=0 2025-05-17 00:55:15.823612 | orchestrator | 2025-05-17 00:55:15.823617 | orchestrator | 2025-05-17 00:55:15.823622 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 00:55:15.823627 | orchestrator | Saturday 17 May 2025 00:55:14 +0000 (0:00:01.064) 0:07:05.056 ********** 2025-05-17 00:55:15.823631 | orchestrator | =============================================================================== 2025-05-17 00:55:15.823636 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.44s 2025-05-17 00:55:15.823641 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.39s 2025-05-17 00:55:15.823645 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.91s 2025-05-17 00:55:15.823650 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.35s 2025-05-17 00:55:15.823655 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.17s 2025-05-17 00:55:15.823660 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.04s 2025-05-17 00:55:15.823664 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.24s 2025-05-17 00:55:15.823672 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.24s 2025-05-17 00:55:15.823677 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.22s 2025-05-17 00:55:15.823681 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.12s 2025-05-17 00:55:15.823686 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.98s 2025-05-17 00:55:15.823691 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.84s 2025-05-17 
00:55:15.823698 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.79s 2025-05-17 00:55:15.823703 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.74s 2025-05-17 00:55:15.823708 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.74s 2025-05-17 00:55:15.823713 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.71s 2025-05-17 00:55:15.823717 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.64s 2025-05-17 00:55:15.823722 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.36s 2025-05-17 00:55:15.823727 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.33s 2025-05-17 00:55:15.823731 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.31s 2025-05-17 00:55:18.848589 | orchestrator | 2025-05-17 00:55:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:18.849515 | orchestrator | 2025-05-17 00:55:18 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:18.852978 | orchestrator | 2025-05-17 00:55:18 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:18.856952 | orchestrator | 2025-05-17 00:55:18 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:18.857165 | orchestrator | 2025-05-17 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:21.900560 | orchestrator | 2025-05-17 00:55:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:21.901976 | orchestrator | 2025-05-17 00:55:21 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:21.904698 | orchestrator | 2025-05-17 00:55:21 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:21.906640 | orchestrator | 2025-05-17 00:55:21 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:21.906721 | orchestrator | 2025-05-17 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:24.944617 | orchestrator | 2025-05-17 00:55:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:24.944723 | orchestrator | 2025-05-17 00:55:24 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:24.945352 | orchestrator | 2025-05-17 00:55:24 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:24.946293 | orchestrator | 2025-05-17 00:55:24 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:24.946318 | orchestrator | 2025-05-17 00:55:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:27.983496 | orchestrator | 2025-05-17 00:55:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:27.984645 | orchestrator | 2025-05-17 00:55:27 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:27.986149 | orchestrator | 2025-05-17 00:55:27 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:27.987298 | orchestrator | 2025-05-17 00:55:27 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:27.987476 | orchestrator | 2025-05-17 00:55:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:31.015366 | orchestrator | 2025-05-17 00:55:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:31.015629 | orchestrator | 2025-05-17 00:55:31 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:31.016154 | orchestrator | 2025-05-17 00:55:31 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:31.017925 | orchestrator | 2025-05-17 00:55:31 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:31.017953 | orchestrator | 2025-05-17 00:55:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:34.065157 | orchestrator | 2025-05-17 00:55:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:34.066428 | orchestrator | 2025-05-17 00:55:34 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:34.069816 | orchestrator | 2025-05-17 00:55:34 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:34.070944 | orchestrator | 2025-05-17 00:55:34 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:34.072800 | orchestrator | 2025-05-17 00:55:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:37.107731 | orchestrator | 2025-05-17 00:55:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:37.109824 | orchestrator | 2025-05-17 00:55:37 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:37.110633 | orchestrator | 2025-05-17 00:55:37 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:37.111503 | orchestrator | 2025-05-17 00:55:37 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:37.111538 | orchestrator | 2025-05-17 00:55:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:40.152887 | orchestrator | 2025-05-17 00:55:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:40.153059 | orchestrator | 2025-05-17 00:55:40 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:40.153922 | orchestrator | 2025-05-17 00:55:40 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:40.154222 | orchestrator | 2025-05-17 00:55:40 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:40.154245 | orchestrator | 2025-05-17 00:55:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:43.181736 | orchestrator | 2025-05-17 00:55:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:43.183708 | orchestrator | 2025-05-17 00:55:43 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:43.185408 | orchestrator | 2025-05-17 00:55:43 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:43.187060 | orchestrator | 2025-05-17 00:55:43 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:43.187092 | orchestrator | 2025-05-17 00:55:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:46.217493 | orchestrator | 2025-05-17 00:55:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:46.218335 | orchestrator | 2025-05-17 00:55:46 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:46.221094 | orchestrator | 2025-05-17 00:55:46 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:46.223104 | orchestrator | 2025-05-17 00:55:46 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:46.223149 | orchestrator | 2025-05-17 00:55:46 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:49.264520 | orchestrator | 2025-05-17 00:55:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:49.265176 | orchestrator | 2025-05-17 00:55:49 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:49.266285 | orchestrator | 2025-05-17 00:55:49 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:49.267882 | orchestrator | 2025-05-17 00:55:49 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:49.267902 | orchestrator | 2025-05-17 00:55:49 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:52.313332 | orchestrator | 2025-05-17 00:55:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:52.315260 | orchestrator | 2025-05-17 00:55:52 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:52.317140 | orchestrator | 2025-05-17 00:55:52 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:52.318721 | orchestrator | 2025-05-17 00:55:52 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:52.319275 | orchestrator | 2025-05-17 00:55:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:55.369223 | orchestrator | 2025-05-17 00:55:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:55.370890 | orchestrator | 2025-05-17 00:55:55 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:55.374324 | orchestrator | 2025-05-17 00:55:55 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:55.374376 | orchestrator | 2025-05-17 00:55:55 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:55.374390 | orchestrator | 2025-05-17 00:55:55 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:55:58.424760 | orchestrator | 2025-05-17 00:55:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:55:58.427173 | orchestrator | 2025-05-17 00:55:58 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:55:58.428862 | orchestrator | 2025-05-17 00:55:58 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:55:58.430183 | orchestrator | 2025-05-17 00:55:58 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:55:58.430371 | orchestrator | 2025-05-17 00:55:58 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:56:01.479465 | orchestrator | 2025-05-17 00:56:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:56:01.481943 | orchestrator | 2025-05-17 00:56:01 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:56:01.482386 | orchestrator | 2025-05-17 00:56:01 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:56:01.484031 | orchestrator | 2025-05-17 00:56:01 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:56:01.484662 | orchestrator | 2025-05-17 00:56:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:56:04.545875 | orchestrator | 2025-05-17 00:56:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:56:04.545959 | orchestrator | 2025-05-17 00:56:04 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:56:04.546400 | orchestrator | 2025-05-17 00:56:04 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:56:04.548422 | orchestrator | 2025-05-17 00:56:04 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:56:04.549097 | orchestrator | 2025-05-17 00:56:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:56:07.600381 | orchestrator | 2025-05-17 00:56:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:56:07.601552 | orchestrator | 2025-05-17 00:56:07 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:56:07.604513 | orchestrator | 2025-05-17 00:56:07 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:56:07.606990 | orchestrator | 2025-05-17 00:56:07 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:56:07.607087 | orchestrator | 2025-05-17 00:56:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:56:10.662319 | orchestrator | 2025-05-17 00:56:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:56:10.663872 | orchestrator | 2025-05-17 00:56:10 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:56:10.665568 | orchestrator | 2025-05-17 00:56:10 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:56:10.667392 | orchestrator | 2025-05-17 00:56:10 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:56:10.667440 | orchestrator | 2025-05-17 00:56:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:56:13.707319 | orchestrator | 2025-05-17 00:56:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:56:13.708929 | orchestrator | 2025-05-17 00:56:13 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:56:13.711861 | orchestrator | 2025-05-17 00:56:13 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:56:13.713688 | orchestrator | 2025-05-17 00:56:13 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED 2025-05-17 00:56:13.713725 | orchestrator | 2025-05-17 00:56:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:56:16.772799 | orchestrator | 2025-05-17 00:56:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:56:16.773340 | orchestrator | 2025-05-17 00:56:16 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:56:16.775371 | orchestrator | 2025-05-17 00:56:16 | INFO  | Task 
623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:16.776344 | orchestrator | 2025-05-17 00:56:16 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:16.776369 | orchestrator | 2025-05-17 00:56:16 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:19.824468 | orchestrator | 2025-05-17 00:56:19 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:19.825075 | orchestrator | 2025-05-17 00:56:19 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:19.825625 | orchestrator | 2025-05-17 00:56:19 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:19.826471 | orchestrator | 2025-05-17 00:56:19 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:19.828586 | orchestrator | 2025-05-17 00:56:19 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:22.865336 | orchestrator | 2025-05-17 00:56:22 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:22.865755 | orchestrator | 2025-05-17 00:56:22 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:22.868928 | orchestrator | 2025-05-17 00:56:22 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:22.870920 | orchestrator | 2025-05-17 00:56:22 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:22.870949 | orchestrator | 2025-05-17 00:56:22 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:25.912615 | orchestrator | 2025-05-17 00:56:25 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:25.912721 | orchestrator | 2025-05-17 00:56:25 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:25.913131 | orchestrator | 2025-05-17 00:56:25 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:25.915824 | orchestrator | 2025-05-17 00:56:25 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:25.916249 | orchestrator | 2025-05-17 00:56:25 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:28.968775 | orchestrator | 2025-05-17 00:56:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:28.971319 | orchestrator | 2025-05-17 00:56:28 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:28.973491 | orchestrator | 2025-05-17 00:56:28 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:28.975010 | orchestrator | 2025-05-17 00:56:28 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:28.975035 | orchestrator | 2025-05-17 00:56:28 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:32.009603 | orchestrator | 2025-05-17 00:56:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:32.010292 | orchestrator | 2025-05-17 00:56:32 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:32.010495 | orchestrator | 2025-05-17 00:56:32 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:32.011288 | orchestrator | 2025-05-17 00:56:32 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:32.011342 | orchestrator | 2025-05-17 00:56:32 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:35.059160 | orchestrator | 2025-05-17 00:56:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:35.060188 | orchestrator | 2025-05-17 00:56:35 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:35.061287 | orchestrator | 2025-05-17 00:56:35 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:35.063124 | orchestrator | 2025-05-17 00:56:35 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:35.063155 | orchestrator | 2025-05-17 00:56:35 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:38.096518 | orchestrator | 2025-05-17 00:56:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:38.096623 | orchestrator | 2025-05-17 00:56:38 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:38.097142 | orchestrator | 2025-05-17 00:56:38 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:38.098148 | orchestrator | 2025-05-17 00:56:38 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:38.098194 | orchestrator | 2025-05-17 00:56:38 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:41.149674 | orchestrator | 2025-05-17 00:56:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:41.151095 | orchestrator | 2025-05-17 00:56:41 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:41.152612 | orchestrator | 2025-05-17 00:56:41 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:41.154789 | orchestrator | 2025-05-17 00:56:41 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:41.154840 | orchestrator | 2025-05-17 00:56:41 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:44.207428 | orchestrator | 2025-05-17 00:56:44 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:44.209221 | orchestrator | 2025-05-17 00:56:44 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:44.211709 | orchestrator | 2025-05-17 00:56:44 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:44.213872 | orchestrator | 2025-05-17 00:56:44 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:44.214165 | orchestrator | 2025-05-17 00:56:44 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:47.263225 | orchestrator | 2025-05-17 00:56:47 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:47.263781 | orchestrator | 2025-05-17 00:56:47 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:47.264533 | orchestrator | 2025-05-17 00:56:47 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:47.265486 | orchestrator | 2025-05-17 00:56:47 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:47.265602 | orchestrator | 2025-05-17 00:56:47 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:50.319068 | orchestrator | 2025-05-17 00:56:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:50.321188 | orchestrator | 2025-05-17 00:56:50 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:50.322486 | orchestrator | 2025-05-17 00:56:50 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:50.324573 | orchestrator | 2025-05-17 00:56:50 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:50.324617 | orchestrator | 2025-05-17 00:56:50 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:53.369207 | orchestrator | 2025-05-17 00:56:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:53.369564 | orchestrator | 2025-05-17 00:56:53 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:53.373381 | orchestrator | 2025-05-17 00:56:53 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:53.374178 | orchestrator | 2025-05-17 00:56:53 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:53.374366 | orchestrator | 2025-05-17 00:56:53 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:56.421424 | orchestrator | 2025-05-17 00:56:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:56.422211 | orchestrator | 2025-05-17 00:56:56 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:56.423566 | orchestrator | 2025-05-17 00:56:56 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:56.424815 | orchestrator | 2025-05-17 00:56:56 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:56.425042 | orchestrator | 2025-05-17 00:56:56 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:56:59.478302 | orchestrator | 2025-05-17 00:56:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:56:59.479994 | orchestrator | 2025-05-17 00:56:59 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:56:59.481790 | orchestrator | 2025-05-17 00:56:59 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:56:59.483558 | orchestrator | 2025-05-17 00:56:59 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:56:59.483595 | orchestrator | 2025-05-17 00:56:59 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:02.536229 | orchestrator | 2025-05-17 00:57:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:02.537737 | orchestrator | 2025-05-17 00:57:02 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:02.543486 | orchestrator | 2025-05-17 00:57:02 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:02.547314 | orchestrator | 2025-05-17 00:57:02 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:57:02.547694 | orchestrator | 2025-05-17 00:57:02 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:05.599041 | orchestrator | 2025-05-17 00:57:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:05.600331 | orchestrator | 2025-05-17 00:57:05 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:05.604159 | orchestrator | 2025-05-17 00:57:05 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:05.605766 | orchestrator | 2025-05-17 00:57:05 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:57:05.605795 | orchestrator | 2025-05-17 00:57:05 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:08.659971 | orchestrator | 2025-05-17 00:57:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:08.660219 | orchestrator | 2025-05-17 00:57:08 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:08.663408 | orchestrator | 2025-05-17 00:57:08 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:08.665242 | orchestrator | 2025-05-17 00:57:08 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:57:08.665262 | orchestrator | 2025-05-17 00:57:08 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:11.715351 | orchestrator | 2025-05-17 00:57:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:11.716883 | orchestrator | 2025-05-17 00:57:11 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:11.718764 | orchestrator | 2025-05-17 00:57:11 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:11.721861 | orchestrator | 2025-05-17 00:57:11 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:57:11.721896 | orchestrator | 2025-05-17 00:57:11 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:14.774151 | orchestrator | 2025-05-17 00:57:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:14.775242 | orchestrator | 2025-05-17 00:57:14 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:14.779566 | orchestrator | 2025-05-17 00:57:14 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:14.781633 | orchestrator | 2025-05-17 00:57:14 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:57:14.782357 | orchestrator | 2025-05-17 00:57:14 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:17.833449 | orchestrator | 2025-05-17 00:57:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:17.835784 | orchestrator | 2025-05-17 00:57:17 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:17.838621 | orchestrator | 2025-05-17 00:57:17 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:17.841177 | orchestrator | 2025-05-17 00:57:17 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:57:17.841517 | orchestrator | 2025-05-17 00:57:17 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:20.906354 | orchestrator | 2025-05-17 00:57:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:20.907341 | orchestrator | 2025-05-17 00:57:20 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:20.910575 | orchestrator | 2025-05-17 00:57:20 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:20.912214 | orchestrator | 2025-05-17 00:57:20 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:57:20.912699 | orchestrator | 2025-05-17 00:57:20 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:23.961665 | orchestrator | 2025-05-17 00:57:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:23.962467 | orchestrator | 2025-05-17 00:57:23 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:23.965323 | orchestrator | 2025-05-17 00:57:23 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:23.967381 | orchestrator | 2025-05-17 00:57:23 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:57:23.967686 | orchestrator | 2025-05-17 00:57:23 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:27.018447 | orchestrator | 2025-05-17 00:57:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:27.018558 | orchestrator | 2025-05-17 00:57:27 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:27.019526 | orchestrator | 2025-05-17 00:57:27 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:27.020630 | orchestrator | 2025-05-17 00:57:27 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state STARTED
2025-05-17 00:57:27.020843 | orchestrator | 2025-05-17 00:57:27 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:30.057393 | orchestrator | 2025-05-17 00:57:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:30.061000 | orchestrator | 2025-05-17 00:57:30 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:30.062178 | orchestrator | 2025-05-17 00:57:30 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:30.062850 | orchestrator | 2025-05-17 00:57:30 | INFO  | Task 516ace6e-b6de-47ee-8f5a-248a18e2a61a is in state SUCCESS
2025-05-17 00:57:30.063099 | orchestrator |
2025-05-17 00:57:30.065089 | orchestrator |
2025-05-17 00:57:30.065128 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 00:57:30.065141 | orchestrator |
2025-05-17 00:57:30.065153 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 00:57:30.065164 | orchestrator | Saturday 17 May 2025 00:55:17 +0000 (0:00:00.299) 0:00:00.299 **********
2025-05-17 00:57:30.065176 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:57:30.065190 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:57:30.065276 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:57:30.065291 | orchestrator |
2025-05-17 00:57:30.065303 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 00:57:30.065314 | orchestrator | Saturday 17 May 2025 00:55:18 +0000 (0:00:00.388) 0:00:00.688 **********
2025-05-17 00:57:30.065344 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-05-17 00:57:30.065356 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-05-17 00:57:30.065367 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-05-17 00:57:30.065378 | orchestrator |
2025-05-17 00:57:30.065389 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-05-17 00:57:30.065400 | orchestrator |
2025-05-17 00:57:30.065411 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-17 00:57:30.065421 | orchestrator | Saturday 17 May 2025 00:55:18 +0000 (0:00:00.277) 0:00:00.965 **********
2025-05-17 00:57:30.065457 | orchestrator | included:
/ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:57:30.065469 | orchestrator |
2025-05-17 00:57:30.065479 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-05-17 00:57:30.065490 | orchestrator | Saturday 17 May 2025 00:55:19 +0000 (0:00:00.711) 0:00:01.677 **********
2025-05-17 00:57:30.065501 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-17 00:57:30.065511 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-17 00:57:30.065522 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-17 00:57:30.065533 | orchestrator |
2025-05-17 00:57:30.065543 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-05-17 00:57:30.065554 | orchestrator | Saturday 17 May 2025 00:55:20 +0000 (0:00:00.757) 0:00:02.435 **********
2025-05-17 00:57:30.065582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-17 00:57:30.065639 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.065680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.065711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.065737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.065757 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-17 00:57:30.065772 | orchestrator |
2025-05-17 00:57:30.065785 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-17 00:57:30.065797 | orchestrator | Saturday 17 May 2025 00:55:21 +0000 (0:00:01.396) 0:00:03.831 **********
2025-05-17 00:57:30.065810 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:57:30.065823 | orchestrator |
2025-05-17 00:57:30.065835 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2025-05-17 00:57:30.065848 | orchestrator | Saturday 17 May 2025 00:55:22 +0000 (0:00:00.742) 0:00:04.574 **********
2025-05-17 00:57:30.065871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.065893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.065907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.065926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.065984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.066006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.066069 | orchestrator | 2025-05-17 00:57:30.066084 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS 
certificate] *** 2025-05-17 00:57:30.066095 | orchestrator | Saturday 17 May 2025 00:55:25 +0000 (0:00:02.971) 0:00:07.545 ********** 2025-05-17 00:57:30.066107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-17 00:57:30.066125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-17 00:57:30.066138 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:57:30.066158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-17 00:57:30.066179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-17 00:57:30.066191 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:57:30.066203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-17 00:57:30.066220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-17 00:57:30.066232 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:57:30.066243 | orchestrator | 2025-05-17 00:57:30.066254 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-17 00:57:30.066265 | orchestrator | Saturday 17 May 2025 00:55:26 +0000 (0:00:01.086) 0:00:08.631 ********** 2025-05-17 00:57:30.066296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-17 00:57:30.066317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-17 00:57:30.066329 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:57:30.066340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-17 00:57:30.066357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-17 00:57:30.066369 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:57:30.066386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-17 00:57:30.066405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-17 00:57:30.066416 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:57:30.066428 | orchestrator | 2025-05-17 00:57:30.066439 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-17 00:57:30.066450 | orchestrator | Saturday 17 May 2025 00:55:27 +0000 (0:00:00.923) 0:00:09.555 ********** 2025-05-17 00:57:30.066461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.066483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.066495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.066521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.066534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.066563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.066577 | orchestrator | 2025-05-17 00:57:30.066588 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-17 00:57:30.066599 | orchestrator | Saturday 17 May 2025 00:55:29 +0000 (0:00:02.168) 0:00:11.724 ********** 2025-05-17 00:57:30.066617 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:57:30.066628 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:57:30.066639 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:57:30.066650 | orchestrator | 2025-05-17 00:57:30.066661 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-17 00:57:30.066672 | orchestrator | Saturday 17 May 2025 00:55:33 +0000 (0:00:03.766) 0:00:15.490 ********** 2025-05-17 00:57:30.066683 | orchestrator | changed: 
[testbed-node-0] 2025-05-17 00:57:30.066693 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:57:30.066704 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:57:30.066715 | orchestrator | 2025-05-17 00:57:30.066725 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-17 00:57:30.066736 | orchestrator | Saturday 17 May 2025 00:55:35 +0000 (0:00:01.998) 0:00:17.488 ********** 2025-05-17 00:57:30.066757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.066769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.066781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-17 00:57:30.066798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.066824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-17 00:57:30.066837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-17 00:57:30.066849 | orchestrator |
2025-05-17 00:57:30.066860 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-17 00:57:30.066871 | orchestrator | Saturday 17 May 2025 00:55:37 +0000 (0:00:02.397) 0:00:19.886 **********
2025-05-17 00:57:30.066881 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:57:30.066892 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:57:30.066903 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:57:30.066914 | orchestrator |
2025-05-17 00:57:30.066924 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-17 00:57:30.066935 | orchestrator | Saturday 17 May 2025 00:55:37 +0000 (0:00:00.377) 0:00:20.263 **********
2025-05-17 00:57:30.066972 | orchestrator |
2025-05-17 00:57:30.066992 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-17 00:57:30.067012 | orchestrator | Saturday 17 May 2025 00:55:38 +0000 (0:00:00.259) 0:00:20.523 **********
2025-05-17 00:57:30.067031 | orchestrator |
2025-05-17 00:57:30.067047 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-17 00:57:30.067058 | orchestrator | Saturday 17 May 2025 00:55:38 +0000 (0:00:00.055) 0:00:20.578 **********
2025-05-17 00:57:30.067069 | orchestrator |
2025-05-17 00:57:30.067080 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-05-17 00:57:30.067090 | orchestrator | Saturday 17 May 2025 00:55:38 +0000 (0:00:00.058) 0:00:20.636 **********
2025-05-17 00:57:30.067109 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:57:30.067119 | orchestrator |
2025-05-17 00:57:30.067130 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-05-17 00:57:30.067141 | orchestrator | Saturday 17 May 2025 00:55:38 +0000 (0:00:00.181) 0:00:20.817 **********
2025-05-17 00:57:30.067151 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:57:30.067162 | orchestrator |
2025-05-17 00:57:30.067172 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-05-17 00:57:30.067183 | orchestrator | Saturday 17 May 2025 00:55:39 +0000 (0:00:00.754) 0:00:21.572 **********
2025-05-17 00:57:30.067194 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:57:30.067204 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:57:30.067215 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:57:30.067225 | orchestrator |
2025-05-17 00:57:30.067236 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-05-17 00:57:30.067252 | orchestrator | Saturday 17 May 2025 00:56:16 +0000 (0:00:37.806) 0:00:59.378 **********
2025-05-17 00:57:30.067263 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:57:30.067274 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:57:30.067285 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:57:30.067295 | orchestrator |
2025-05-17 00:57:30.067306 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-17 00:57:30.067316 | orchestrator | Saturday 17 May 2025 00:57:17 +0000 (0:01:00.384) 0:01:59.763 **********
2025-05-17 00:57:30.067327 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:57:30.067338 | orchestrator |
2025-05-17 00:57:30.067349 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-05-17 00:57:30.067359 | orchestrator | Saturday 17 May 2025 00:57:18 +0000 (0:00:00.713) 0:02:00.476 **********
2025-05-17 00:57:30.067370 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:57:30.067381 | orchestrator |
2025-05-17 00:57:30.067392 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-05-17 00:57:30.067402 | orchestrator | Saturday 17 May 2025 00:57:20 +0000 (0:00:02.634) 0:02:03.111 **********
2025-05-17 00:57:30.067413 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:57:30.067424 | orchestrator |
2025-05-17 00:57:30.067435 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-05-17 00:57:30.067445 | orchestrator | Saturday 17 May 2025 00:57:23 +0000 (0:00:02.479) 0:02:05.590 **********
2025-05-17 00:57:30.067456 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:57:30.067466 | orchestrator |
2025-05-17 00:57:30.067477 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-05-17 00:57:30.067488 | orchestrator | Saturday 17 May 2025 00:57:26 +0000 (0:00:03.113) 0:02:08.704 **********
2025-05-17 00:57:30.067499 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:57:30.067510 | orchestrator |
2025-05-17 00:57:30.067528 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:57:30.067541 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 00:57:30.067554 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:57:30.067565 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-17 00:57:30.067577 | orchestrator |
2025-05-17 00:57:30.067587 | orchestrator |
2025-05-17 00:57:30.067598 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 00:57:30.067609 | orchestrator | Saturday 17 May 2025 00:57:29 +0000 (0:00:02.887) 0:02:11.591 **********
2025-05-17 00:57:30.067620 | orchestrator | ===============================================================================
2025-05-17 00:57:30.067637 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 60.38s
2025-05-17 00:57:30.067648 | orchestrator | opensearch : Restart opensearch container ------------------------------ 37.81s
2025-05-17 00:57:30.067658 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.77s
2025-05-17 00:57:30.067669 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.11s
2025-05-17 00:57:30.067680 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.97s
2025-05-17 00:57:30.067691 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.89s
2025-05-17 00:57:30.067701 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.63s
2025-05-17 00:57:30.067712 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.48s
2025-05-17 00:57:30.067722 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.40s
2025-05-17 00:57:30.067733 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.17s
2025-05-17 00:57:30.067744 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.00s
2025-05-17 00:57:30.067754 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.40s
2025-05-17 00:57:30.067765 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.09s
2025-05-17 00:57:30.067776 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.92s
2025-05-17 00:57:30.067786 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.76s
2025-05-17 00:57:30.067798 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.76s
2025-05-17 00:57:30.067816 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.74s
2025-05-17 00:57:30.067833 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s
2025-05-17 00:57:30.067851 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s
2025-05-17 00:57:30.067869 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2025-05-17 00:57:30.067885 | orchestrator | 2025-05-17 00:57:30 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:33.117536 | orchestrator | 2025-05-17 00:57:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:33.117647 | orchestrator | 2025-05-17 00:57:33 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:33.119202 | orchestrator | 2025-05-17 00:57:33 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:33.119576 | orchestrator | 2025-05-17 00:57:33 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:36.173058 | orchestrator | 2025-05-17 00:57:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:36.176730 | orchestrator | 2025-05-17 00:57:36 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:36.179522 | orchestrator | 2025-05-17 00:57:36 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:36.179576 | orchestrator | 2025-05-17 00:57:36 |
INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:39.232681 | orchestrator | 2025-05-17 00:57:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:39.236364 | orchestrator | 2025-05-17 00:57:39 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:39.238488 | orchestrator | 2025-05-17 00:57:39 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:39.238850 | orchestrator | 2025-05-17 00:57:39 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:42.295642 | orchestrator | 2025-05-17 00:57:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:42.296976 | orchestrator | 2025-05-17 00:57:42 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:42.300645 | orchestrator | 2025-05-17 00:57:42 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:42.300705 | orchestrator | 2025-05-17 00:57:42 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:45.342591 | orchestrator | 2025-05-17 00:57:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:45.346063 | orchestrator | 2025-05-17 00:57:45 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:45.348985 | orchestrator | 2025-05-17 00:57:45 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:45.349045 | orchestrator | 2025-05-17 00:57:45 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:48.391483 | orchestrator | 2025-05-17 00:57:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:48.393392 | orchestrator | 2025-05-17 00:57:48 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:48.395480 | orchestrator | 2025-05-17 00:57:48 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:48.395619 | orchestrator | 2025-05-17 00:57:48 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:51.444400 | orchestrator | 2025-05-17 00:57:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:51.446570 | orchestrator | 2025-05-17 00:57:51 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:51.449266 | orchestrator | 2025-05-17 00:57:51 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:51.449359 | orchestrator | 2025-05-17 00:57:51 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:54.500848 | orchestrator | 2025-05-17 00:57:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:54.502415 | orchestrator | 2025-05-17 00:57:54 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:54.502846 | orchestrator | 2025-05-17 00:57:54 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:54.502963 | orchestrator | 2025-05-17 00:57:54 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:57:57.549566 | orchestrator | 2025-05-17 00:57:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:57:57.550813 | orchestrator | 2025-05-17 00:57:57 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:57:57.552667 | orchestrator | 2025-05-17 00:57:57 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:57:57.552708 | orchestrator | 2025-05-17 00:57:57 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:00.605224 | orchestrator | 2025-05-17 00:58:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:00.606043 | orchestrator | 2025-05-17 00:58:00 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:58:00.607013 | orchestrator | 2025-05-17 00:58:00 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:58:00.607078 | orchestrator | 2025-05-17 00:58:00 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:03.668309 | orchestrator | 2025-05-17 00:58:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:03.670799 | orchestrator | 2025-05-17 00:58:03 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:58:03.672230 | orchestrator | 2025-05-17 00:58:03 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:58:03.672601 | orchestrator | 2025-05-17 00:58:03 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:06.725984 | orchestrator | 2025-05-17 00:58:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:06.726223 | orchestrator | 2025-05-17 00:58:06 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:58:06.727887 | orchestrator | 2025-05-17 00:58:06 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:58:06.727912 | orchestrator | 2025-05-17 00:58:06 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:09.779508 | orchestrator | 2025-05-17 00:58:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:09.780764 | orchestrator | 2025-05-17 00:58:09 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:58:09.782633 | orchestrator | 2025-05-17 00:58:09 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:58:09.782673 | orchestrator | 2025-05-17 00:58:09 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:12.829778 | orchestrator | 2025-05-17 00:58:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:12.831103 | orchestrator | 2025-05-17 00:58:12 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:58:12.831803 | orchestrator | 2025-05-17 00:58:12 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:58:12.831842 | orchestrator | 2025-05-17 00:58:12 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:15.881253 | orchestrator | 2025-05-17 00:58:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:15.882353 | orchestrator | 2025-05-17 00:58:15 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:58:15.884246 | orchestrator | 2025-05-17 00:58:15 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:58:15.884347 | orchestrator | 2025-05-17 00:58:15 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:18.938284 | orchestrator | 2025-05-17 00:58:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:18.938999 | orchestrator | 2025-05-17 00:58:18 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:58:18.940782 | orchestrator | 2025-05-17 00:58:18 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:58:18.940825 | orchestrator | 2025-05-17 00:58:18 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:21.983121 | orchestrator | 2025-05-17 00:58:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:21.983230 | orchestrator | 2025-05-17 00:58:21 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED
2025-05-17 00:58:21.984980 | orchestrator | 2025-05-17 00:58:21 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED
2025-05-17 00:58:21.985011 | orchestrator | 2025-05-17 00:58:21 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:25.034843 | orchestrator | 2025-05-17 00:58:25 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state
STARTED 2025-05-17 00:58:25.035980 | orchestrator | 2025-05-17 00:58:25 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:58:25.038298 | orchestrator | 2025-05-17 00:58:25 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:58:25.038343 | orchestrator | 2025-05-17 00:58:25 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:58:28.082716 | orchestrator | 2025-05-17 00:58:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:58:28.084171 | orchestrator | 2025-05-17 00:58:28 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:58:28.085785 | orchestrator | 2025-05-17 00:58:28 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:58:28.085824 | orchestrator | 2025-05-17 00:58:28 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:58:31.142102 | orchestrator | 2025-05-17 00:58:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:58:31.142369 | orchestrator | 2025-05-17 00:58:31 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state STARTED 2025-05-17 00:58:31.143243 | orchestrator | 2025-05-17 00:58:31 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:58:31.143269 | orchestrator | 2025-05-17 00:58:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:58:34.200497 | orchestrator | 2025-05-17 00:58:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:58:34.202160 | orchestrator | 2025-05-17 00:58:34 | INFO  | Task 8c8379f1-fa10-4561-a9d9-029180480ed5 is in state SUCCESS 2025-05-17 00:58:34.203544 | orchestrator | 2025-05-17 00:58:34.203585 | orchestrator | 2025-05-17 00:58:34.203599 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-17 00:58:34.203611 | orchestrator | 2025-05-17 00:58:34.203622 | 
orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-17 00:58:34.203634 | orchestrator | Saturday 17 May 2025 00:55:17 +0000 (0:00:00.161) 0:00:00.161 **********
2025-05-17 00:58:34.203645 | orchestrator | ok: [localhost] => {
2025-05-17 00:58:34.203660 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-05-17 00:58:34.203671 | orchestrator | }
2025-05-17 00:58:34.203683 | orchestrator |
2025-05-17 00:58:34.203694 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-05-17 00:58:34.203706 | orchestrator | Saturday 17 May 2025 00:55:17 +0000 (0:00:00.049) 0:00:00.211 **********
2025-05-17 00:58:34.203718 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-05-17 00:58:34.203730 | orchestrator | ...ignoring
2025-05-17 00:58:34.203778 | orchestrator |
2025-05-17 00:58:34.203816 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-05-17 00:58:34.203829 | orchestrator | Saturday 17 May 2025 00:55:20 +0000 (0:00:02.555) 0:00:02.766 **********
2025-05-17 00:58:34.203840 | orchestrator | skipping: [localhost]
2025-05-17 00:58:34.203851 | orchestrator |
2025-05-17 00:58:34.203862 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-05-17 00:58:34.203873 | orchestrator | Saturday 17 May 2025 00:55:20 +0000 (0:00:00.058) 0:00:02.825 **********
2025-05-17 00:58:34.203884 | orchestrator | ok: [localhost]
2025-05-17 00:58:34.203895 | orchestrator |
2025-05-17 00:58:34.203905 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 00:58:34.203916 | orchestrator |
2025-05-17 00:58:34.203954 | orchestrator | TASK [Group hosts based on Kolla
action] ***************************************
2025-05-17 00:58:34.203966 | orchestrator | Saturday 17 May 2025 00:55:20 +0000 (0:00:00.176) 0:00:03.001 **********
2025-05-17 00:58:34.203977 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:34.203988 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:34.203999 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:34.204038 | orchestrator |
2025-05-17 00:58:34.204050 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 00:58:34.204061 | orchestrator | Saturday 17 May 2025 00:55:21 +0000 (0:00:00.398) 0:00:03.400 **********
2025-05-17 00:58:34.204071 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-05-17 00:58:34.204082 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-05-17 00:58:34.204093 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-05-17 00:58:34.204104 | orchestrator |
2025-05-17 00:58:34.204115 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-05-17 00:58:34.204127 | orchestrator |
2025-05-17 00:58:34.204141 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-05-17 00:58:34.204155 | orchestrator | Saturday 17 May 2025 00:55:21 +0000 (0:00:00.384) 0:00:03.784 **********
2025-05-17 00:58:34.204169 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-17 00:58:34.204181 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-17 00:58:34.204194 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-17 00:58:34.204207 | orchestrator |
2025-05-17 00:58:34.204219 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-17 00:58:34.204231 | orchestrator | Saturday 17 May 2025 00:55:22 +0000 (0:00:00.607) 0:00:04.391 **********
2025-05-17 00:58:34.204244 | orchestrator | included:
/ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:58:34.204257 | orchestrator | 2025-05-17 00:58:34.204269 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-17 00:58:34.204282 | orchestrator | Saturday 17 May 2025 00:55:22 +0000 (0:00:00.883) 0:00:05.275 ********** 2025-05-17 00:58:34.204348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-17 00:58:34.204369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-17 00:58:34.204404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 
fall 5 backup', '']}}}}) 2025-05-17 00:58:34.204419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-17 00:58:34.204435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-17 00:58:34.204454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-17 
00:58:34.204467 | orchestrator | 2025-05-17 00:58:34.204481 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-17 00:58:34.204494 | orchestrator | Saturday 17 May 2025 00:55:26 +0000 (0:00:03.897) 0:00:09.172 ********** 2025-05-17 00:58:34.204519 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.204532 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.204543 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.204554 | orchestrator | 2025-05-17 00:58:34.204565 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-17 00:58:34.204575 | orchestrator | Saturday 17 May 2025 00:55:27 +0000 (0:00:00.698) 0:00:09.870 ********** 2025-05-17 00:58:34.204586 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.204597 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.204608 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.204619 | orchestrator | 2025-05-17 00:58:34.204629 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-17 00:58:34.204640 | orchestrator | Saturday 17 May 2025 00:55:28 +0000 (0:00:01.330) 0:00:11.201 ********** 2025-05-17 00:58:34.204679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-17 00:58:34.204701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-17 00:58:34.204720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-17 00:58:34.204741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-17 00:58:34.204761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-17 00:58:34.204773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-17 00:58:34.204784 | orchestrator | 2025-05-17 00:58:34.204795 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-17 00:58:34.204805 | orchestrator | Saturday 17 May 2025 00:55:34 +0000 (0:00:05.680) 0:00:16.881 ********** 2025-05-17 00:58:34.204816 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.204827 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.204837 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.204848 | orchestrator | 2025-05-17 00:58:34.204858 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-17 00:58:34.204869 | orchestrator | Saturday 17 May 2025 00:55:35 +0000 (0:00:01.093) 0:00:17.975 ********** 2025-05-17 00:58:34.204879 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:34.204890 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.204900 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:34.204911 | orchestrator | 2025-05-17 00:58:34.204971 
| orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-17 00:58:34.204984 | orchestrator | Saturday 17 May 2025 00:55:44 +0000 (0:00:08.371) 0:00:26.347 ********** 2025-05-17 00:58:34.205009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-17 00:58:34.205029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-17 
00:58:34.205048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-17 00:58:34.205075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-17 00:58:34.205088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-17 00:58:34.205099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-17 00:58:34.205111 | orchestrator | 2025-05-17 00:58:34.205122 | orchestrator | TASK [mariadb : Create MariaDB volume] 
***************************************** 2025-05-17 00:58:34.205133 | orchestrator | Saturday 17 May 2025 00:55:47 +0000 (0:00:03.722) 0:00:30.069 ********** 2025-05-17 00:58:34.205143 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.205154 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:34.205165 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:34.205176 | orchestrator | 2025-05-17 00:58:34.205187 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-17 00:58:34.205198 | orchestrator | Saturday 17 May 2025 00:55:48 +0000 (0:00:01.009) 0:00:31.079 ********** 2025-05-17 00:58:34.205208 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.205219 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:34.205230 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:34.205241 | orchestrator | 2025-05-17 00:58:34.205251 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-17 00:58:34.205262 | orchestrator | Saturday 17 May 2025 00:55:49 +0000 (0:00:00.436) 0:00:31.515 ********** 2025-05-17 00:58:34.205273 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.205284 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:34.205295 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:34.205305 | orchestrator | 2025-05-17 00:58:34.205316 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-17 00:58:34.205327 | orchestrator | Saturday 17 May 2025 00:55:49 +0000 (0:00:00.294) 0:00:31.810 ********** 2025-05-17 00:58:34.205339 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-17 00:58:34.205351 | orchestrator | ...ignoring 2025-05-17 00:58:34.205369 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-17 00:58:34.205380 | orchestrator | ...ignoring 2025-05-17 00:58:34.205390 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-17 00:58:34.205401 | orchestrator | ...ignoring 2025-05-17 00:58:34.205412 | orchestrator | 2025-05-17 00:58:34.205435 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-17 00:58:34.205446 | orchestrator | Saturday 17 May 2025 00:56:00 +0000 (0:00:10.966) 0:00:42.777 ********** 2025-05-17 00:58:34.205457 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.205467 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:34.205478 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:34.205489 | orchestrator | 2025-05-17 00:58:34.205500 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-17 00:58:34.205511 | orchestrator | Saturday 17 May 2025 00:56:01 +0000 (0:00:00.618) 0:00:43.396 ********** 2025-05-17 00:58:34.205521 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:34.205532 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.205543 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.205554 | orchestrator | 2025-05-17 00:58:34.205565 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-17 00:58:34.205576 | orchestrator | Saturday 17 May 2025 00:56:01 +0000 (0:00:00.569) 0:00:43.966 ********** 2025-05-17 00:58:34.205586 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:34.205597 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.205608 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.205619 | orchestrator | 2025-05-17 00:58:34.205636 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-17 00:58:34.205647 | orchestrator | Saturday 17 May 2025 00:56:02 +0000 (0:00:00.468) 0:00:44.434 ********** 2025-05-17 00:58:34.205658 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:34.205669 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.205680 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.205691 | orchestrator | 2025-05-17 00:58:34.205701 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-17 00:58:34.205712 | orchestrator | Saturday 17 May 2025 00:56:02 +0000 (0:00:00.708) 0:00:45.143 ********** 2025-05-17 00:58:34.205723 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.205733 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:34.205744 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:34.205754 | orchestrator | 2025-05-17 00:58:34.205765 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-17 00:58:34.205775 | orchestrator | Saturday 17 May 2025 00:56:03 +0000 (0:00:00.663) 0:00:45.807 ********** 2025-05-17 00:58:34.205786 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:34.205797 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.205807 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.205818 | orchestrator | 2025-05-17 00:58:34.205829 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-17 00:58:34.205839 | orchestrator | Saturday 17 May 2025 00:56:04 +0000 (0:00:00.540) 0:00:46.347 ********** 2025-05-17 00:58:34.205850 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.205860 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.205871 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-17 00:58:34.205881 | orchestrator | 2025-05-17 
00:58:34.205891 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-17 00:58:34.205902 | orchestrator | Saturday 17 May 2025 00:56:04 +0000 (0:00:00.571) 0:00:46.918 ********** 2025-05-17 00:58:34.205913 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.205942 | orchestrator | 2025-05-17 00:58:34.205954 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-17 00:58:34.205987 | orchestrator | Saturday 17 May 2025 00:56:14 +0000 (0:00:10.230) 0:00:57.148 ********** 2025-05-17 00:58:34.205998 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.206009 | orchestrator | 2025-05-17 00:58:34.206073 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-17 00:58:34.206085 | orchestrator | Saturday 17 May 2025 00:56:14 +0000 (0:00:00.138) 0:00:57.287 ********** 2025-05-17 00:58:34.206095 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:34.206106 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.206117 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.206128 | orchestrator | 2025-05-17 00:58:34.206138 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-17 00:58:34.206149 | orchestrator | Saturday 17 May 2025 00:56:16 +0000 (0:00:01.008) 0:00:58.295 ********** 2025-05-17 00:58:34.206160 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.206170 | orchestrator | 2025-05-17 00:58:34.206181 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-17 00:58:34.206192 | orchestrator | Saturday 17 May 2025 00:56:24 +0000 (0:00:08.400) 0:01:06.695 ********** 2025-05-17 00:58:34.206202 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.206213 | orchestrator | 2025-05-17 00:58:34.206224 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-05-17 00:58:34.206235 | orchestrator | Saturday 17 May 2025 00:56:25 +0000 (0:00:01.517) 0:01:08.213 ********** 2025-05-17 00:58:34.206245 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.206256 | orchestrator | 2025-05-17 00:58:34.206267 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-17 00:58:34.206277 | orchestrator | Saturday 17 May 2025 00:56:28 +0000 (0:00:02.658) 0:01:10.871 ********** 2025-05-17 00:58:34.206288 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.206299 | orchestrator | 2025-05-17 00:58:34.206310 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-17 00:58:34.206320 | orchestrator | Saturday 17 May 2025 00:56:28 +0000 (0:00:00.109) 0:01:10.980 ********** 2025-05-17 00:58:34.206331 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:34.206342 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.206352 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.206363 | orchestrator | 2025-05-17 00:58:34.206373 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-17 00:58:34.206384 | orchestrator | Saturday 17 May 2025 00:56:29 +0000 (0:00:00.484) 0:01:11.465 ********** 2025-05-17 00:58:34.206395 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:34.206405 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:34.206416 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:34.206427 | orchestrator | 2025-05-17 00:58:34.206437 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-17 00:58:34.206467 | orchestrator | Saturday 17 May 2025 00:56:29 +0000 (0:00:00.466) 0:01:11.932 ********** 2025-05-17 00:58:34.206479 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-17 
00:58:34.206490 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:34.206501 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:34.206511 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.206522 | orchestrator | 2025-05-17 00:58:34.206533 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-17 00:58:34.206543 | orchestrator | skipping: no hosts matched 2025-05-17 00:58:34.206554 | orchestrator | 2025-05-17 00:58:34.206564 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-17 00:58:34.206575 | orchestrator | 2025-05-17 00:58:34.206586 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-17 00:58:34.206596 | orchestrator | Saturday 17 May 2025 00:56:45 +0000 (0:00:15.613) 0:01:27.545 ********** 2025-05-17 00:58:34.206607 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:34.206617 | orchestrator | 2025-05-17 00:58:34.206635 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-17 00:58:34.206646 | orchestrator | Saturday 17 May 2025 00:57:04 +0000 (0:00:19.151) 0:01:46.697 ********** 2025-05-17 00:58:34.206665 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:34.206677 | orchestrator | 2025-05-17 00:58:34.206687 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-17 00:58:34.206698 | orchestrator | Saturday 17 May 2025 00:57:19 +0000 (0:00:15.556) 0:02:02.253 ********** 2025-05-17 00:58:34.206709 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:34.206720 | orchestrator | 2025-05-17 00:58:34.206730 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-17 00:58:34.206741 | orchestrator | 2025-05-17 00:58:34.206751 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-05-17 00:58:34.206762 | orchestrator | Saturday 17 May 2025 00:57:22 +0000 (0:00:02.512) 0:02:04.766 ********** 2025-05-17 00:58:34.206773 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:34.206783 | orchestrator | 2025-05-17 00:58:34.206793 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-17 00:58:34.206804 | orchestrator | Saturday 17 May 2025 00:57:37 +0000 (0:00:15.280) 0:02:20.046 ********** 2025-05-17 00:58:34.206815 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:34.206825 | orchestrator | 2025-05-17 00:58:34.206836 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-17 00:58:34.206847 | orchestrator | Saturday 17 May 2025 00:57:58 +0000 (0:00:20.517) 0:02:40.564 ********** 2025-05-17 00:58:34.206857 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:34.206868 | orchestrator | 2025-05-17 00:58:34.206879 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-17 00:58:34.206889 | orchestrator | 2025-05-17 00:58:34.206900 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-17 00:58:34.206910 | orchestrator | Saturday 17 May 2025 00:58:00 +0000 (0:00:02.447) 0:02:43.011 ********** 2025-05-17 00:58:34.206978 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.206993 | orchestrator | 2025-05-17 00:58:34.207003 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-17 00:58:34.207015 | orchestrator | Saturday 17 May 2025 00:58:12 +0000 (0:00:12.272) 0:02:55.284 ********** 2025-05-17 00:58:34.207037 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.207048 | orchestrator | 2025-05-17 00:58:34.207059 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-17 00:58:34.207070 
| orchestrator | Saturday 17 May 2025 00:58:17 +0000 (0:00:04.548) 0:02:59.832 ********** 2025-05-17 00:58:34.207080 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.207090 | orchestrator | 2025-05-17 00:58:34.207101 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-17 00:58:34.207112 | orchestrator | 2025-05-17 00:58:34.207123 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-17 00:58:34.207134 | orchestrator | Saturday 17 May 2025 00:58:20 +0000 (0:00:02.471) 0:03:02.304 ********** 2025-05-17 00:58:34.207144 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:58:34.207153 | orchestrator | 2025-05-17 00:58:34.207163 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-17 00:58:34.207172 | orchestrator | Saturday 17 May 2025 00:58:20 +0000 (0:00:00.744) 0:03:03.049 ********** 2025-05-17 00:58:34.207182 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.207192 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.207202 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.207211 | orchestrator | 2025-05-17 00:58:34.207221 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-17 00:58:34.207230 | orchestrator | Saturday 17 May 2025 00:58:23 +0000 (0:00:02.506) 0:03:05.556 ********** 2025-05-17 00:58:34.207239 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.207249 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.207259 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.207288 | orchestrator | 2025-05-17 00:58:34.207299 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-17 00:58:34.207309 | orchestrator | Saturday 17 May 2025 00:58:25 +0000 (0:00:02.214) 0:03:07.771 
********** 2025-05-17 00:58:34.207318 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.207328 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.207337 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.207347 | orchestrator | 2025-05-17 00:58:34.207357 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-17 00:58:34.207366 | orchestrator | Saturday 17 May 2025 00:58:27 +0000 (0:00:02.281) 0:03:10.052 ********** 2025-05-17 00:58:34.207375 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.207385 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.207394 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:34.207404 | orchestrator | 2025-05-17 00:58:34.207414 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-17 00:58:34.207423 | orchestrator | Saturday 17 May 2025 00:58:29 +0000 (0:00:02.203) 0:03:12.256 ********** 2025-05-17 00:58:34.207432 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:34.207442 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:34.207452 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:34.207461 | orchestrator | 2025-05-17 00:58:34.207476 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-17 00:58:34.207485 | orchestrator | Saturday 17 May 2025 00:58:33 +0000 (0:00:03.356) 0:03:15.612 ********** 2025-05-17 00:58:34.207495 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:34.207504 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:34.207514 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:34.207523 | orchestrator | 2025-05-17 00:58:34.207533 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 00:58:34.207542 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  
2025-05-17 00:58:34.207553 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-17 00:58:34.207571 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-17 00:58:34.207581 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-17 00:58:34.207591 | orchestrator | 2025-05-17 00:58:34.207600 | orchestrator | 2025-05-17 00:58:34.207609 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 00:58:34.207619 | orchestrator | Saturday 17 May 2025 00:58:33 +0000 (0:00:00.363) 0:03:15.976 ********** 2025-05-17 00:58:34.207628 | orchestrator | =============================================================================== 2025-05-17 00:58:34.207638 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.07s 2025-05-17 00:58:34.207648 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.43s 2025-05-17 00:58:34.207657 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 15.61s 2025-05-17 00:58:34.207667 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.27s 2025-05-17 00:58:34.207676 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.97s 2025-05-17 00:58:34.207686 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.23s 2025-05-17 00:58:34.207695 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.40s 2025-05-17 00:58:34.207705 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 8.37s 2025-05-17 00:58:34.207714 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.68s 2025-05-17 
00:58:34.207729 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.96s 2025-05-17 00:58:34.207739 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.55s 2025-05-17 00:58:34.207749 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.90s 2025-05-17 00:58:34.207758 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.72s 2025-05-17 00:58:34.207768 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.36s 2025-05-17 00:58:34.207777 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.66s 2025-05-17 00:58:34.207786 | orchestrator | Check MariaDB service --------------------------------------------------- 2.56s 2025-05-17 00:58:34.207796 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.51s 2025-05-17 00:58:34.207805 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.47s 2025-05-17 00:58:34.207814 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.28s 2025-05-17 00:58:34.207824 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.21s 2025-05-17 00:58:34.207834 | orchestrator | 2025-05-17 00:58:34 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state STARTED 2025-05-17 00:58:34.207844 | orchestrator | 2025-05-17 00:58:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:58:37.268531 | orchestrator | 2025-05-17 00:58:37 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:58:37.269672 | orchestrator | 2025-05-17 00:58:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:58:37.271781 | orchestrator | 2025-05-17 00:58:37 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in 
state STARTED 2025-05-17 00:58:37.281295 | orchestrator | 2025-05-17 00:58:37.281375 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-17 00:58:37.281399 | orchestrator | 2025-05-17 00:58:37.281419 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-17 00:58:37.281437 | orchestrator | 2025-05-17 00:58:37.281449 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-17 00:58:37.281460 | orchestrator | Saturday 17 May 2025 00:45:49 +0000 (0:00:01.697) 0:00:01.697 ********** 2025-05-17 00:58:37.281473 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.281486 | orchestrator | 2025-05-17 00:58:37.281497 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-17 00:58:37.281508 | orchestrator | Saturday 17 May 2025 00:45:50 +0000 (0:00:01.174) 0:00:02.871 ********** 2025-05-17 00:58:37.281536 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 00:58:37.281548 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-17 00:58:37.281559 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-17 00:58:37.281570 | orchestrator | 2025-05-17 00:58:37.281581 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-17 00:58:37.281591 | orchestrator | Saturday 17 May 2025 00:45:51 +0000 (0:00:00.594) 0:00:03.466 ********** 2025-05-17 00:58:37.281603 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.281614 | orchestrator | 2025-05-17 00:58:37.281624 | orchestrator | 
TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-17 00:58:37.281635 | orchestrator | Saturday 17 May 2025 00:45:52 +0000 (0:00:01.240) 0:00:04.707 ********** 2025-05-17 00:58:37.281646 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.281659 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.281670 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.281708 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.281720 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.281731 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.281741 | orchestrator | 2025-05-17 00:58:37.281752 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-17 00:58:37.281763 | orchestrator | Saturday 17 May 2025 00:45:53 +0000 (0:00:01.319) 0:00:06.026 ********** 2025-05-17 00:58:37.281773 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.281784 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.281795 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.281812 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.281830 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.281848 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.281867 | orchestrator | 2025-05-17 00:58:37.281888 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-17 00:58:37.281906 | orchestrator | Saturday 17 May 2025 00:45:54 +0000 (0:00:00.944) 0:00:06.971 ********** 2025-05-17 00:58:37.281949 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.281963 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.281975 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.281987 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.282000 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.282011 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.282064 | orchestrator | 2025-05-17 
00:58:37.282076 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-17 00:58:37.282087 | orchestrator | Saturday 17 May 2025 00:45:55 +0000 (0:00:01.078) 0:00:08.050 ********** 2025-05-17 00:58:37.282098 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.282109 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.282120 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.282131 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.282141 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.282152 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.282163 | orchestrator | 2025-05-17 00:58:37.282174 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-17 00:58:37.282185 | orchestrator | Saturday 17 May 2025 00:45:56 +0000 (0:00:01.204) 0:00:09.254 ********** 2025-05-17 00:58:37.282196 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.282207 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.282217 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.282228 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.282239 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.282250 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.282261 | orchestrator | 2025-05-17 00:58:37.282272 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-17 00:58:37.282283 | orchestrator | Saturday 17 May 2025 00:45:57 +0000 (0:00:00.827) 0:00:10.082 ********** 2025-05-17 00:58:37.282294 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.282310 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.282330 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.282348 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.282366 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.282386 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.282406 
| orchestrator | 2025-05-17 00:58:37.282426 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-17 00:58:37.282452 | orchestrator | Saturday 17 May 2025 00:45:59 +0000 (0:00:01.533) 0:00:11.615 ********** 2025-05-17 00:58:37.282474 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.282493 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.282513 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.282530 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.282541 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.282552 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.282563 | orchestrator | 2025-05-17 00:58:37.282573 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-17 00:58:37.282584 | orchestrator | Saturday 17 May 2025 00:46:00 +0000 (0:00:00.775) 0:00:12.391 ********** 2025-05-17 00:58:37.282608 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.282619 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.282630 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.282640 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.282651 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.282662 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.282672 | orchestrator | 2025-05-17 00:58:37.282698 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-17 00:58:37.282710 | orchestrator | Saturday 17 May 2025 00:46:01 +0000 (0:00:01.138) 0:00:13.529 ********** 2025-05-17 00:58:37.282721 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 00:58:37.282732 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 00:58:37.282742 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 
00:58:37.282753 | orchestrator | 2025-05-17 00:58:37.282764 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-17 00:58:37.282774 | orchestrator | Saturday 17 May 2025 00:46:01 +0000 (0:00:00.642) 0:00:14.171 ********** 2025-05-17 00:58:37.282785 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.282795 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.282806 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.282817 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.282836 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.282847 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.282858 | orchestrator | 2025-05-17 00:58:37.282868 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-17 00:58:37.282879 | orchestrator | Saturday 17 May 2025 00:46:03 +0000 (0:00:01.234) 0:00:15.406 ********** 2025-05-17 00:58:37.282889 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 00:58:37.282900 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 00:58:37.282910 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 00:58:37.282973 | orchestrator | 2025-05-17 00:58:37.282987 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-17 00:58:37.282998 | orchestrator | Saturday 17 May 2025 00:46:05 +0000 (0:00:02.816) 0:00:18.223 ********** 2025-05-17 00:58:37.283009 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-17 00:58:37.283020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-17 00:58:37.283030 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-17 00:58:37.283041 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.283052 | orchestrator | 
2025-05-17 00:58:37.283063 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-17 00:58:37.283074 | orchestrator | Saturday 17 May 2025 00:46:06 +0000 (0:00:00.547) 0:00:18.770 ********** 2025-05-17 00:58:37.283086 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-17 00:58:37.283100 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-17 00:58:37.283111 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-17 00:58:37.283122 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.283133 | orchestrator | 2025-05-17 00:58:37.283144 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-17 00:58:37.283201 | orchestrator | Saturday 17 May 2025 00:46:07 +0000 (0:00:00.798) 0:00:19.568 ********** 2025-05-17 00:58:37.283217 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-17 00:58:37.283231 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-17 00:58:37.283242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-17 00:58:37.283254 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.283265 | orchestrator | 2025-05-17 00:58:37.283275 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-17 00:58:37.283298 | orchestrator | Saturday 17 May 2025 00:46:07 +0000 (0:00:00.238) 0:00:19.807 ********** 2025-05-17 00:58:37.283352 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-17 00:46:03.849495', 'end': '2025-05-17 00:46:04.106484', 'delta': '0:00:00.256989', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-17 00:58:37.283384 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 
'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-17 00:46:04.713201', 'end': '2025-05-17 00:46:04.981678', 'delta': '0:00:00.268477', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-17 00:58:37.283405 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-17 00:46:05.517532', 'end': '2025-05-17 00:46:05.803045', 'delta': '0:00:00.285513', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-17 00:58:37.283426 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.283446 | orchestrator | 2025-05-17 00:58:37.283475 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-17 00:58:37.283487 | orchestrator | Saturday 17 May 2025 00:46:07 +0000 (0:00:00.243) 0:00:20.051 ********** 2025-05-17 00:58:37.283498 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.283510 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.283520 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.283531 | orchestrator | ok: 
[testbed-node-4] 2025-05-17 00:58:37.283554 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.283566 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.283577 | orchestrator | 2025-05-17 00:58:37.283588 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-17 00:58:37.283599 | orchestrator | Saturday 17 May 2025 00:46:09 +0000 (0:00:01.704) 0:00:21.755 ********** 2025-05-17 00:58:37.283610 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.283620 | orchestrator | 2025-05-17 00:58:37.283631 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-17 00:58:37.283641 | orchestrator | Saturday 17 May 2025 00:46:10 +0000 (0:00:00.703) 0:00:22.458 ********** 2025-05-17 00:58:37.283652 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.283663 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.283674 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.283684 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.283695 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.283706 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.283717 | orchestrator | 2025-05-17 00:58:37.283728 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-17 00:58:37.283738 | orchestrator | Saturday 17 May 2025 00:46:11 +0000 (0:00:00.868) 0:00:23.327 ********** 2025-05-17 00:58:37.283749 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.283759 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.283770 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.283781 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.283791 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.283802 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.283813 | orchestrator | 2025-05-17 00:58:37.283823 | 
orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-17 00:58:37.283834 | orchestrator | Saturday 17 May 2025 00:46:12 +0000 (0:00:01.810) 0:00:25.137 ********** 2025-05-17 00:58:37.283845 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.283856 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.283866 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.283877 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.283888 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.283899 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.283910 | orchestrator | 2025-05-17 00:58:37.283974 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-17 00:58:37.283989 | orchestrator | Saturday 17 May 2025 00:46:13 +0000 (0:00:00.747) 0:00:25.885 ********** 2025-05-17 00:58:37.284009 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.284021 | orchestrator | 2025-05-17 00:58:37.284032 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-17 00:58:37.284043 | orchestrator | Saturday 17 May 2025 00:46:13 +0000 (0:00:00.266) 0:00:26.152 ********** 2025-05-17 00:58:37.284054 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.284065 | orchestrator | 2025-05-17 00:58:37.284076 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-17 00:58:37.284086 | orchestrator | Saturday 17 May 2025 00:46:15 +0000 (0:00:01.352) 0:00:27.504 ********** 2025-05-17 00:58:37.284097 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.284108 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.284118 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.284129 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.284140 | orchestrator | skipping: [testbed-node-4] 
2025-05-17 00:58:37.284164 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.284183 | orchestrator | 2025-05-17 00:58:37.284194 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-17 00:58:37.284211 | orchestrator | Saturday 17 May 2025 00:46:16 +0000 (0:00:01.068) 0:00:28.574 ********** 2025-05-17 00:58:37.284222 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.284234 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.284244 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.284255 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.284266 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.284277 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.284288 | orchestrator | 2025-05-17 00:58:37.284299 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-17 00:58:37.284316 | orchestrator | Saturday 17 May 2025 00:46:17 +0000 (0:00:01.331) 0:00:29.905 ********** 2025-05-17 00:58:37.284335 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.284353 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.284371 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.284388 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.284408 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.284427 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.284445 | orchestrator | 2025-05-17 00:58:37.284456 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-17 00:58:37.284467 | orchestrator | Saturday 17 May 2025 00:46:18 +0000 (0:00:00.642) 0:00:30.547 ********** 2025-05-17 00:58:37.284478 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.284489 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.284500 | orchestrator | skipping: [testbed-node-2] 
2025-05-17 00:58:37.284509 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.284518 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.284528 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.284537 | orchestrator | 2025-05-17 00:58:37.284547 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-17 00:58:37.284556 | orchestrator | Saturday 17 May 2025 00:46:19 +0000 (0:00:00.796) 0:00:31.343 ********** 2025-05-17 00:58:37.284566 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.284576 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.284586 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.284595 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.284605 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.284615 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.284624 | orchestrator | 2025-05-17 00:58:37.284634 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-17 00:58:37.284643 | orchestrator | Saturday 17 May 2025 00:46:19 +0000 (0:00:00.609) 0:00:31.952 ********** 2025-05-17 00:58:37.284653 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.284662 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.284672 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.284681 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.284691 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.284700 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.284709 | orchestrator | 2025-05-17 00:58:37.284719 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-17 00:58:37.284729 | orchestrator | Saturday 17 May 2025 00:46:20 +0000 (0:00:00.830) 0:00:32.783 ********** 2025-05-17 00:58:37.284738 | orchestrator | skipping: [testbed-node-0] 
2025-05-17 00:58:37.284747 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.284757 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.284766 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.284776 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.284786 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.284795 | orchestrator | 2025-05-17 00:58:37.284805 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-17 00:58:37.284823 | orchestrator | Saturday 17 May 2025 00:46:21 +0000 (0:00:00.793) 0:00:33.577 ********** 2025-05-17 00:58:37.284833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.284844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.284862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-17 00:58:37.284878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.284888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.284898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.284908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.284918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.284969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part1', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part14', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part15', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part16', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.284998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-01-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285030 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285085 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.285101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b', 'scsi-SQEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b-part16', 'scsi-SQEMU_QEMU_HARDDISK_6ebcf83d-a258-4df5-8538-7e1cda047c8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-05-17 00:58:37.285161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc', 'scsi-SQEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc-part1', 'scsi-SQEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc-part14', 'scsi-SQEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc-part15', 'scsi-SQEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc-part16', 'scsi-SQEMU_QEMU_HARDDISK_d47211d7-96e0-4a69-a671-7a86a44a64cc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285289 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285302 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.285321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7dd92559--5dfb--56e9--86ff--64c31a268c5e-osd--block--7dd92559--5dfb--56e9--86ff--64c31a268c5e', 'dm-uuid-LVM-ICpBvTgj5dTnFvOdfSeM1M1wzBfOATCHauj4UssTQYCoB2gyofatqm3DvgKoeSc2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--25c991a6--e724--5c1a--b659--154410c60242-osd--block--25c991a6--e724--5c1a--b659--154410c60242', 'dm-uuid-LVM-m2NNGHf8cIhV8Uzscf4kYJllmq0D4CK4ATzekxxRpUCvB6RkiASYM12j2j322eRc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285366 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285415 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.285434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285555 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7dd92559--5dfb--56e9--86ff--64c31a268c5e-osd--block--7dd92559--5dfb--56e9--86ff--64c31a268c5e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JohAZ7-Yatq-n8jn-9Jsy-X2fm-6zew-06IpCs', 'scsi-0QEMU_QEMU_HARDDISK_4c541808-fecb-473a-bfa6-e6107b1a17c0', 'scsi-SQEMU_QEMU_HARDDISK_4c541808-fecb-473a-bfa6-e6107b1a17c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--25c991a6--e724--5c1a--b659--154410c60242-osd--block--25c991a6--e724--5c1a--b659--154410c60242'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C2G7Nt-gTCt-aKFX-QQcl-CDKe-yNdg-Yx1MWs', 'scsi-0QEMU_QEMU_HARDDISK_0e5716a4-9f06-4595-a8e5-44869be2d3e3', 'scsi-SQEMU_QEMU_HARDDISK_0e5716a4-9f06-4595-a8e5-44869be2d3e3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6120ef73-2521-4d83-8ac9-34a2289f978b', 'scsi-SQEMU_QEMU_HARDDISK_6120ef73-2521-4d83-8ac9-34a2289f978b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93bb0954--6685--5c67--a7e0--a3574f092206-osd--block--93bb0954--6685--5c67--a7e0--a3574f092206', 'dm-uuid-LVM-l2FeIeeIowH2T17wpx4XON9TwpscsTNGQ0MS1sG7tqDgRBCXmVTgFxydy1EGckCJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e21dde7b--e402--5316--8511--fd8df0cc7e38-osd--block--e21dde7b--e402--5316--8511--fd8df0cc7e38', 
'dm-uuid-LVM-lsII9KsMdOVEhAs5jK6Hm4bWNIOUR5HdDSBLzkrclZplpraImsUWrgTbKIyO0WGQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.285841 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.285878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee', 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part1', 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part14', 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part15', 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part16', 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93bb0954--6685--5c67--a7e0--a3574f092206-osd--block--93bb0954--6685--5c67--a7e0--a3574f092206'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-p0RudH-CC8C-EJgG-w2Ga-LCgf-BcId-roxDFR', 'scsi-0QEMU_QEMU_HARDDISK_6fc6848d-5127-4f65-b412-e829995e25e7', 'scsi-SQEMU_QEMU_HARDDISK_6fc6848d-5127-4f65-b412-e829995e25e7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e21dde7b--e402--5316--8511--fd8df0cc7e38-osd--block--e21dde7b--e402--5316--8511--fd8df0cc7e38'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dLQXNw-Fjla-neAB-ksBp-dcxc-Nukq-DBHkj4', 'scsi-0QEMU_QEMU_HARDDISK_e3068b10-d912-449c-8868-8ffe0bc578f0', 'scsi-SQEMU_QEMU_HARDDISK_e3068b10-d912-449c-8868-8ffe0bc578f0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bec56d32-b1fb-48c0-a20f-a6daa2f9686d', 'scsi-SQEMU_QEMU_HARDDISK_bec56d32-b1fb-48c0-a20f-a6daa2f9686d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.285993 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.286089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--a83a275b--240b--53eb--892d--9c3e23ab252d-osd--block--a83a275b--240b--53eb--892d--9c3e23ab252d', 'dm-uuid-LVM-SL0sNJNvgH2gWOvAMJwT3AsxRffsOCqYxObuOld4qGl6gElqmBsvB6aFehhG1EIH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b4d5f2e3--0e32--57e8--8b55--58d04db15593-osd--block--b4d5f2e3--0e32--57e8--8b55--58d04db15593', 'dm-uuid-LVM-OGMCMSTb5TfUSHUlwndgAQn048uH9N59BO7PfFYYUbEkwMCqDgYXeKdwXHp0Xvdy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286196 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 00:58:37.286231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part1', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part14', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part15', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part16', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.286249 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a83a275b--240b--53eb--892d--9c3e23ab252d-osd--block--a83a275b--240b--53eb--892d--9c3e23ab252d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nWZ2GN-50zj-EzOk-p08k-P19H-WgDK-kK662v', 'scsi-0QEMU_QEMU_HARDDISK_4ddb2821-e209-41e3-b031-9f23c5adf4cf', 'scsi-SQEMU_QEMU_HARDDISK_4ddb2821-e209-41e3-b031-9f23c5adf4cf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.286260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b4d5f2e3--0e32--57e8--8b55--58d04db15593-osd--block--b4d5f2e3--0e32--57e8--8b55--58d04db15593'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gH6WB7-T494-Hq6L-G3g0-c5yg-j2Ve-S1f2Hy', 'scsi-0QEMU_QEMU_HARDDISK_8746963d-35d6-4275-a53f-fa471798b09a', 'scsi-SQEMU_QEMU_HARDDISK_8746963d-35d6-4275-a53f-fa471798b09a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.286277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9243530-1d89-4c38-b4ef-a9d7ed453cca', 'scsi-SQEMU_QEMU_HARDDISK_c9243530-1d89-4c38-b4ef-a9d7ed453cca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37.286289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 00:58:37 | INFO  | Task 623588a7-4543-4207-acc9-075a6222e036 is in state SUCCESS 2025-05-17 00:58:37.286335 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.286352 | orchestrator | 2025-05-17 00:58:37.286368 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-17 00:58:37.286386 | orchestrator | Saturday 17 May 2025 00:46:22 +0000 (0:00:01.517) 0:00:35.094 ********** 2025-05-17 00:58:37.286404 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.286414 | orchestrator | 2025-05-17 00:58:37.286423 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-17 00:58:37.286433 | orchestrator | Saturday 17 May 2025 00:46:23 +0000 (0:00:00.315) 0:00:35.410 ********** 2025-05-17 00:58:37.286442 | orchestrator | skipping: 
[testbed-node-0]
2025-05-17 00:58:37.286452 | orchestrator |
TASK [ceph-facts : set_fact rgw_hostname] **************************************
Saturday 17 May 2025 00:46:23 +0000 (0:00:00.180) 0:00:35.590 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : check if the ceph conf exists] ******************************
Saturday 17 May 2025 00:46:24 +0000 (0:00:00.998) 0:00:36.588 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
Saturday 17 May 2025 00:46:25 +0000 (0:00:01.374) 0:00:37.963 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : read osd pool default crush rule] ***************************
Saturday 17 May 2025 00:46:26 +0000 (0:00:00.759) 0:00:38.723 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
Saturday 17 May 2025 00:46:27 +0000 (0:00:01.047) 0:00:39.770 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : read osd pool default crush rule] ***************************
Saturday 17 May 2025 00:46:28 +0000 (0:00:00.773) 0:00:40.544 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
Saturday 17 May 2025 00:46:29 +0000 (0:00:01.120) 0:00:41.665 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
Saturday 17 May 2025 00:46:30 +0000 (0:00:00.912) 0:00:42.578 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-1] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-1] => (item=testbed-node-1)
skipping: [testbed-node-1] => (item=testbed-node-2)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=testbed-node-0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-2] => (item=testbed-node-2)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
Saturday 17 May 2025 00:46:33 +0000 (0:00:02.809) 0:00:45.388 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-1] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-1] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-2] => (item=testbed-node-0)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-1] => (item=testbed-node-2)
skipping: [testbed-node-2] => (item=testbed-node-2)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
Saturday 17 May 2025 00:46:35 +0000 (0:00:02.560) 0:00:47.948 **********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-1] => (item=testbed-node-0)
ok: [testbed-node-0] => (item=testbed-node-1)
ok: [testbed-node-2] => (item=testbed-node-0)
ok: [testbed-node-4] => (item=testbed-node-0)
ok: [testbed-node-1] => (item=testbed-node-1)
ok: [testbed-node-3] => (item=testbed-node-0)
ok: [testbed-node-2] => (item=testbed-node-1)
ok: [testbed-node-0] => (item=testbed-node-2)
ok: [testbed-node-4] => (item=testbed-node-1)
ok: [testbed-node-3] => (item=testbed-node-1)
ok: [testbed-node-1] => (item=testbed-node-2)
ok: [testbed-node-5] => (item=testbed-node-0)
ok: [testbed-node-2] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-2)
ok: [testbed-node-4] => (item=testbed-node-2)
ok: [testbed-node-5] => (item=testbed-node-1)
ok: [testbed-node-5] => (item=testbed-node-2)

TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
Saturday 17 May 2025 00:46:41 +0000 (0:00:06.077) 0:00:54.025 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-1] => (item=testbed-node-0)
skipping: [testbed-node-1] => (item=testbed-node-1)
skipping: [testbed-node-1] => (item=testbed-node-2)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=testbed-node-0)
skipping: [testbed-node-2] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-2)
skipping: [testbed-node-1]
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-2]
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
Saturday 17 May 2025 00:46:43 +0000 (0:00:01.472) 0:00:55.497 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=testbed-node-0)
skipping: [testbed-node-1] => (item=testbed-node-1)
skipping: [testbed-node-1] => (item=testbed-node-2)
skipping: [testbed-node-2] => (item=testbed-node-0)
skipping: [testbed-node-2] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-2)
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-5]
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-4]

TASK [ceph-facts : set_fact _current_monitor_address] **************************
Saturday 17 May 2025 00:46:44 +0000 (0:00:01.082) 0:00:56.580 **********
ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
skipping: [testbed-node-5]

TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
Saturday 17 May 2025 00:46:45 +0000 (0:00:01.248) 0:00:57.828 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Saturday 17 May 2025 00:46:46 +0000 (0:00:01.321) 0:00:59.150 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
Saturday 17 May 2025 00:46:47 +0000 (0:00:00.924) 0:01:00.075 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
Saturday 17 May 2025 00:46:48 +0000 (0:00:00.908) 0:01:00.984 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
Saturday 17 May 2025 00:46:49 +0000 (0:00:00.794) 0:01:01.778 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : set_fact _interface] ****************************************
Saturday 17 May 2025 00:46:50 +0000 (0:00:01.020) 0:01:02.799 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
Saturday 17 May 2025 00:46:51 +0000 (0:00:00.892) 0:01:03.692 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
Saturday 17 May 2025 00:46:52 +0000 (0:00:00.661) 0:01:04.354 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Saturday 17 May 2025 00:46:53 +0000 (0:00:01.412) 0:01:05.766 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
Saturday 17 May 2025 00:46:54 +0000 (0:00:00.732) 0:01:06.499 **********
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=0)

TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
Saturday 17 May 2025 00:46:55 +0000 (0:00:01.439) 0:01:07.938 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Saturday 17 May 2025 00:46:56 +0000 (0:00:00.658) 0:01:08.597 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
Saturday 17 May 2025 00:46:57 +0000 (0:00:00.693) 0:01:09.290 **********
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_host] ********************************
Saturday 17 May 2025 00:46:57 +0000 (0:00:00.634) 0:01:09.925 **********
skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_all] *********************************
Saturday 17 May 2025 00:46:58 +0000 (0:00:00.744) 0:01:10.669 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-node-5]
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]

TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
Saturday 17 May 2025 00:46:59 +0000 (0:00:00.906) 0:01:11.576 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
Saturday 17 May 2025 00:46:59 +0000 (0:00:00.569) 0:01:12.146 **********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-facts : set_fact ceph_admin_command] ********************************
Saturday 17 May 2025 00:47:00 +0000 (0:00:00.900) 0:01:13.046 **********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-handler : include check_running_containers.yml] *********************
Saturday 17 May 2025 00:47:02 +0000 (0:00:02.049) 0:01:15.096 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : check for a mon container] ********************************
Saturday 17 May 2025 00:47:04 +0000 (0:00:01.273) 0:01:16.370 **********
ok: [testbed-node-0]
skipping: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for an osd container] *******************************
Saturday 17 May 2025 00:47:04 +0000 (0:00:00.786) 0:01:17.156 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : check for a mds container] ********************************
Saturday 17 May 2025 00:47:06 +0000 (0:00:01.182) 0:01:18.339 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : check for a rgw container] ********************************
Saturday 17 May 2025 00:47:07 +0000 (0:00:01.388) 0:01:19.727 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : check for a mgr container] ********************************
Saturday 17 May 2025 00:47:08 +0000 (0:00:01.351) 0:01:21.078 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
ok: [testbed-node-2]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd mirror container] *************************
Saturday 17 May 2025 00:47:09 +0000 (0:00:01.141) 0:01:22.220 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a nfs container] ********************************
Saturday 17 May 2025 00:47:10 +0000 (0:00:00.715) 0:01:22.936 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a tcmu-runner container] ************************
Saturday 17 May 2025 00:47:11 +0000 (0:00:01.072) 0:01:24.008 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd-target-api container] *********************
Saturday 17 May 2025 00:47:12 +0000 (0:00:00.581) 0:01:24.590 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd-target-gw container] **********************
Saturday 17 May 2025 00:47:13 +0000 (0:00:00.779) 0:01:25.369 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
2025-05-17 00:58:37.290597 | orchestrator | 2025-05-17 00:58:37.290604 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-17 00:58:37.290610 | orchestrator | Saturday 17 May 2025 00:47:13 +0000 (0:00:00.632) 0:01:26.002 ********** 2025-05-17 00:58:37.290617 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.290624 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.290630 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.290637 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.290644 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.290650 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.290657 | orchestrator | 2025-05-17 00:58:37.290663 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-17 00:58:37.290670 | orchestrator | Saturday 17 May 2025 00:47:15 +0000 (0:00:01.344) 0:01:27.346 ********** 2025-05-17 00:58:37.290677 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.290683 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.290690 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.290697 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.290703 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.290710 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.290716 | orchestrator | 2025-05-17 00:58:37.290723 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-17 00:58:37.290729 | orchestrator | Saturday 17 May 2025 00:47:15 +0000 (0:00:00.729) 0:01:28.075 ********** 2025-05-17 00:58:37.290736 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.290743 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.290749 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.290756 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.290762 | orchestrator | skipping: 
[testbed-node-4] 2025-05-17 00:58:37.290769 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.290776 | orchestrator | 2025-05-17 00:58:37.290782 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-17 00:58:37.290789 | orchestrator | Saturday 17 May 2025 00:47:16 +0000 (0:00:00.865) 0:01:28.940 ********** 2025-05-17 00:58:37.290796 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.290802 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.290809 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.290816 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.290822 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.290829 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.290836 | orchestrator | 2025-05-17 00:58:37.290842 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-17 00:58:37.290849 | orchestrator | Saturday 17 May 2025 00:47:17 +0000 (0:00:00.616) 0:01:29.557 ********** 2025-05-17 00:58:37.290855 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.290862 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.290869 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.290875 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.290887 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.290894 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.290900 | orchestrator | 2025-05-17 00:58:37.290907 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-17 00:58:37.290914 | orchestrator | Saturday 17 May 2025 00:47:18 +0000 (0:00:00.912) 0:01:30.470 ********** 2025-05-17 00:58:37.290936 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.290944 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.290951 | orchestrator | skipping: [testbed-node-2] 2025-05-17 
00:58:37.290957 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.290964 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.290971 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.290977 | orchestrator | 2025-05-17 00:58:37.290984 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-17 00:58:37.290990 | orchestrator | Saturday 17 May 2025 00:47:19 +0000 (0:00:00.921) 0:01:31.392 ********** 2025-05-17 00:58:37.290997 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291004 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291010 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291017 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291023 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291030 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291037 | orchestrator | 2025-05-17 00:58:37.291043 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-17 00:58:37.291050 | orchestrator | Saturday 17 May 2025 00:47:19 +0000 (0:00:00.869) 0:01:32.262 ********** 2025-05-17 00:58:37.291056 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291063 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291069 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291076 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291082 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291089 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291095 | orchestrator | 2025-05-17 00:58:37.291102 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-17 00:58:37.291109 | orchestrator | Saturday 17 May 2025 00:47:20 +0000 (0:00:00.593) 0:01:32.855 ********** 2025-05-17 00:58:37.291115 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.291122 | 
orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.291129 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.291135 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291142 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291154 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291161 | orchestrator | 2025-05-17 00:58:37.291167 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-17 00:58:37.291174 | orchestrator | Saturday 17 May 2025 00:47:21 +0000 (0:00:00.848) 0:01:33.704 ********** 2025-05-17 00:58:37.291180 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.291187 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.291194 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.291200 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.291207 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.291214 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.291220 | orchestrator | 2025-05-17 00:58:37.291227 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-17 00:58:37.291233 | orchestrator | Saturday 17 May 2025 00:47:22 +0000 (0:00:00.632) 0:01:34.337 ********** 2025-05-17 00:58:37.291240 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291247 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291253 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291260 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291270 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291277 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291283 | orchestrator | 2025-05-17 00:58:37.291290 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-17 00:58:37.291308 | orchestrator | Saturday 17 May 2025 00:47:22 +0000 (0:00:00.835) 0:01:35.172 ********** 2025-05-17 
00:58:37.291320 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291331 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291343 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291354 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291365 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291377 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291384 | orchestrator | 2025-05-17 00:58:37.291390 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-17 00:58:37.291397 | orchestrator | Saturday 17 May 2025 00:47:23 +0000 (0:00:00.619) 0:01:35.792 ********** 2025-05-17 00:58:37.291404 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291410 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291417 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291423 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291430 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291436 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291443 | orchestrator | 2025-05-17 00:58:37.291449 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-17 00:58:37.291456 | orchestrator | Saturday 17 May 2025 00:47:24 +0000 (0:00:00.911) 0:01:36.703 ********** 2025-05-17 00:58:37.291462 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291469 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291475 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291482 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291488 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291495 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291501 | orchestrator | 2025-05-17 00:58:37.291508 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] 
****************** 2025-05-17 00:58:37.291514 | orchestrator | Saturday 17 May 2025 00:47:25 +0000 (0:00:00.623) 0:01:37.326 ********** 2025-05-17 00:58:37.291521 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291527 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291534 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291540 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291547 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291553 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291560 | orchestrator | 2025-05-17 00:58:37.291566 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-17 00:58:37.291573 | orchestrator | Saturday 17 May 2025 00:47:25 +0000 (0:00:00.844) 0:01:38.171 ********** 2025-05-17 00:58:37.291579 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291586 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291592 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291599 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291606 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291612 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291619 | orchestrator | 2025-05-17 00:58:37.291625 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-17 00:58:37.291632 | orchestrator | Saturday 17 May 2025 00:47:26 +0000 (0:00:00.648) 0:01:38.820 ********** 2025-05-17 00:58:37.291639 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291645 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291652 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291658 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291665 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291671 | orchestrator | skipping: [testbed-node-5] 2025-05-17 
00:58:37.291678 | orchestrator | 2025-05-17 00:58:37.291684 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-17 00:58:37.291691 | orchestrator | Saturday 17 May 2025 00:47:27 +0000 (0:00:01.099) 0:01:39.919 ********** 2025-05-17 00:58:37.291698 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291732 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291738 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291745 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291751 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291758 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291764 | orchestrator | 2025-05-17 00:58:37.291771 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-17 00:58:37.291778 | orchestrator | Saturday 17 May 2025 00:47:28 +0000 (0:00:00.703) 0:01:40.622 ********** 2025-05-17 00:58:37.291784 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291791 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291797 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291804 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291810 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291817 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291824 | orchestrator | 2025-05-17 00:58:37.291830 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-17 00:58:37.291843 | orchestrator | Saturday 17 May 2025 00:47:29 +0000 (0:00:00.864) 0:01:41.487 ********** 2025-05-17 00:58:37.291850 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291856 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291863 | orchestrator | skipping: [testbed-node-2] 2025-05-17 
00:58:37.291870 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291876 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291883 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291889 | orchestrator | 2025-05-17 00:58:37.291896 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-17 00:58:37.291903 | orchestrator | Saturday 17 May 2025 00:47:29 +0000 (0:00:00.680) 0:01:42.167 ********** 2025-05-17 00:58:37.291909 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.291916 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.291960 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.291968 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.291974 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.291981 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.291988 | orchestrator | 2025-05-17 00:58:37.291999 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-17 00:58:37.292006 | orchestrator | Saturday 17 May 2025 00:47:30 +0000 (0:00:00.890) 0:01:43.058 ********** 2025-05-17 00:58:37.292012 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.292019 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.292026 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.292032 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.292039 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.292046 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.292053 | orchestrator | 2025-05-17 00:58:37.292059 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-17 00:58:37.292066 | orchestrator | Saturday 17 May 2025 00:47:31 +0000 (0:00:00.674) 0:01:43.732 ********** 2025-05-17 00:58:37.292073 | orchestrator | skipping: 
[testbed-node-0] => (item=)  2025-05-17 00:58:37.292079 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-17 00:58:37.292086 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-17 00:58:37.292093 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-17 00:58:37.292099 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.292106 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-17 00:58:37.292112 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-17 00:58:37.292119 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.292126 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-17 00:58:37.292132 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-17 00:58:37.292139 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.292150 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-17 00:58:37.292157 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-17 00:58:37.292164 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.292170 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.292177 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-17 00:58:37.292183 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-17 00:58:37.292190 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.292196 | orchestrator | 2025-05-17 00:58:37.292203 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-17 00:58:37.292209 | orchestrator | Saturday 17 May 2025 00:47:32 +0000 (0:00:01.027) 0:01:44.759 ********** 2025-05-17 00:58:37.292216 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-17 00:58:37.292223 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-17 00:58:37.292229 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.292236 | orchestrator | skipping: [testbed-node-1] => 
(item=osd memory target)  2025-05-17 00:58:37.292243 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-17 00:58:37.292250 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-17 00:58:37.292256 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-17 00:58:37.292263 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.292270 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-17 00:58:37.292276 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-17 00:58:37.292283 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.292292 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-17 00:58:37.292303 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-17 00:58:37.292314 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.292324 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.292335 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-17 00:58:37.292346 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-17 00:58:37.292356 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.292365 | orchestrator | 2025-05-17 00:58:37.292375 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-17 00:58:37.292385 | orchestrator | Saturday 17 May 2025 00:47:33 +0000 (0:00:00.743) 0:01:45.503 ********** 2025-05-17 00:58:37.292394 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.292403 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.292413 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.292422 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.292432 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.292442 | orchestrator | skipping: [testbed-node-5] 
2025-05-17 00:58:37.292451 | orchestrator | 2025-05-17 00:58:37.292461 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-17 00:58:37.292470 | orchestrator | Saturday 17 May 2025 00:47:34 +0000 (0:00:00.888) 0:01:46.391 ********** 2025-05-17 00:58:37.292481 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.292491 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.292501 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.292513 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.292523 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.292535 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.292545 | orchestrator | 2025-05-17 00:58:37.292563 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-17 00:58:37.292574 | orchestrator | Saturday 17 May 2025 00:47:34 +0000 (0:00:00.708) 0:01:47.099 ********** 2025-05-17 00:58:37.292585 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.292595 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.292614 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.292624 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.292635 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.292646 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.292656 | orchestrator | 2025-05-17 00:58:37.292666 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-17 00:58:37.292677 | orchestrator | Saturday 17 May 2025 00:47:35 +0000 (0:00:01.048) 0:01:48.147 ********** 2025-05-17 00:58:37.292687 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.292698 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.292709 | orchestrator | skipping: [testbed-node-2] 2025-05-17 
00:58:37.292726 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.292736 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.292747 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.292757 | orchestrator | 2025-05-17 00:58:37.292767 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-17 00:58:37.292779 | orchestrator | Saturday 17 May 2025 00:47:36 +0000 (0:00:00.650) 0:01:48.798 ********** 2025-05-17 00:58:37.292789 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.292800 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.292810 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.292820 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.292829 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.292839 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.292848 | orchestrator | 2025-05-17 00:58:37.292859 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-17 00:58:37.292870 | orchestrator | Saturday 17 May 2025 00:47:37 +0000 (0:00:00.844) 0:01:49.642 ********** 2025-05-17 00:58:37.292881 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.292892 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.292903 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.292914 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.292948 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.292960 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.292971 | orchestrator | 2025-05-17 00:58:37.292982 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-17 00:58:37.292994 | orchestrator | Saturday 17 May 2025 00:47:37 +0000 (0:00:00.571) 0:01:50.213 ********** 2025-05-17 00:58:37.293004 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-3)  2025-05-17 00:58:37.293015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.293025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.293035 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.293045 | orchestrator | 2025-05-17 00:58:37.293052 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-17 00:58:37.293058 | orchestrator | Saturday 17 May 2025 00:47:38 +0000 (0:00:00.958) 0:01:51.172 ********** 2025-05-17 00:58:37.293064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.293070 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.293076 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.293082 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.293089 | orchestrator | 2025-05-17 00:58:37.293095 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-17 00:58:37.293101 | orchestrator | Saturday 17 May 2025 00:47:39 +0000 (0:00:00.425) 0:01:51.597 ********** 2025-05-17 00:58:37.293107 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.293114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.293120 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.293126 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.293132 | orchestrator | 2025-05-17 00:58:37.293138 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 00:58:37.293151 | orchestrator | Saturday 17 May 2025 00:47:39 +0000 (0:00:00.446) 0:01:52.044 ********** 2025-05-17 00:58:37.293157 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.293163 | orchestrator | skipping: 
[testbed-node-1]
2025-05-17 00:58:37.293169 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293175 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293181 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293187 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293193 | orchestrator |
2025-05-17 00:58:37.293200 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-17 00:58:37.293206 | orchestrator | Saturday 17 May 2025 00:47:40 +0000 (0:00:00.637) 0:01:52.681 **********
2025-05-17 00:58:37.293212 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-17 00:58:37.293218 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293224 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-17 00:58:37.293230 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293236 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-17 00:58:37.293242 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293248 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-17 00:58:37.293254 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293260 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-17 00:58:37.293267 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293273 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-17 00:58:37.293279 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293285 | orchestrator |
2025-05-17 00:58:37.293291 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-17 00:58:37.293297 | orchestrator | Saturday 17 May 2025 00:47:41 +0000 (0:00:01.020) 0:01:53.702 **********
2025-05-17 00:58:37.293303 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293310 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293322 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293329 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293335 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293341 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293347 | orchestrator |
2025-05-17 00:58:37.293353 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-17 00:58:37.293359 | orchestrator | Saturday 17 May 2025 00:47:42 +0000 (0:00:00.655) 0:01:54.357 **********
2025-05-17 00:58:37.293365 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293371 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293377 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293384 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293390 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293396 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293402 | orchestrator |
2025-05-17 00:58:37.293408 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-17 00:58:37.293414 | orchestrator | Saturday 17 May 2025 00:47:42 +0000 (0:00:00.790) 0:01:55.148 **********
2025-05-17 00:58:37.293425 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-17 00:58:37.293431 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293437 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-17 00:58:37.293443 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293449 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-17 00:58:37.293455 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293461 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-17 00:58:37.293467 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293473 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-17 00:58:37.293479 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293485 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-17 00:58:37.293492 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293506 | orchestrator |
2025-05-17 00:58:37.293512 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-17 00:58:37.293518 | orchestrator | Saturday 17 May 2025 00:47:43 +0000 (0:00:00.747) 0:01:55.895 **********
2025-05-17 00:58:37.293524 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293530 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293536 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293543 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-17 00:58:37.293549 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293555 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-17 00:58:37.293561 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293567 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-17 00:58:37.293574 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293580 | orchestrator |
2025-05-17 00:58:37.293586 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-17 00:58:37.293592 | orchestrator | Saturday 17 May 2025 00:47:44 +0000 (0:00:00.846) 0:01:56.742 **********
2025-05-17 00:58:37.293598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:58:37.293604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:58:37.293610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:58:37.293616 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293622 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-17 00:58:37.293628 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-17 00:58:37.293634 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-17 00:58:37.293640 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293646 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-17 00:58:37.293653 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-17 00:58:37.293659 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-17 00:58:37.293665 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.293677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.293683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.293689 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293695 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-17 00:58:37.293701 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-17 00:58:37.293707 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-17 00:58:37.293713 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293719 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-17 00:58:37.293725 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-17 00:58:37.293731 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-17 00:58:37.293737 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293743 | orchestrator |
2025-05-17 00:58:37.293749 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-17 00:58:37.293755 | orchestrator | Saturday 17 May 2025 00:47:46 +0000 (0:00:01.581) 0:01:58.324 **********
2025-05-17 00:58:37.293761 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293767 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293774 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293779 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293786 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293796 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293802 | orchestrator |
2025-05-17 00:58:37.293808 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-17 00:58:37.293818 | orchestrator | Saturday 17 May 2025 00:47:47 +0000 (0:00:01.198) 0:01:59.523 **********
2025-05-17 00:58:37.293825 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293831 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293837 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293843 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-17 00:58:37.293849 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293855 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-17 00:58:37.293861 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293867 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-17 00:58:37.293873 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293880 | orchestrator |
2025-05-17 00:58:37.293886 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-17 00:58:37.293892 | orchestrator | Saturday 17 May 2025 00:47:48 +0000 (0:00:01.279) 0:02:00.802 **********
2025-05-17 00:58:37.293898 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293904 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293913 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293936 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.293948 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.293955 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.293961 | orchestrator |
2025-05-17 00:58:37.293967 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-17 00:58:37.293973 | orchestrator | Saturday 17 May 2025 00:47:49 +0000 (0:00:01.218) 0:02:02.021 **********
2025-05-17 00:58:37.293979 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.293985 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.293991 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.293997 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.294003 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.294009 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.294046 | orchestrator |
2025-05-17 00:58:37.294054 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] ***********
2025-05-17 00:58:37.294060 | orchestrator | Saturday 17 May 2025 00:47:50 +0000 (0:00:01.247) 0:02:03.269 **********
2025-05-17 00:58:37.294067 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.294073 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.294079 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.294085 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.294091 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.294097 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.294103 | orchestrator |
2025-05-17 00:58:37.294109 | orchestrator | TASK [ceph-container-common : enable ceph.target] ******************************
2025-05-17 00:58:37.294115 | orchestrator | Saturday 17 May 2025 00:47:52 +0000 (0:00:01.454) 0:02:04.723 **********
2025-05-17 00:58:37.294121 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.294127 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.294133 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.294140 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.294146 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.294152 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.294158 | orchestrator |
2025-05-17 00:58:37.294164 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] ***********************
2025-05-17 00:58:37.294170 | orchestrator | Saturday 17 May 2025 00:47:55 +0000 (0:00:02.996) 0:02:07.720 **********
2025-05-17 00:58:37.294177 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.294184 | orchestrator |
2025-05-17 00:58:37.294190 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************
2025-05-17 00:58:37.294221 | orchestrator | Saturday 17 May 2025 00:47:56 +0000 (0:00:01.311) 0:02:09.031 **********
2025-05-17 00:58:37.294228 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.294234 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.294240 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.294246 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.294253 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.294259 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.294265 | orchestrator |
2025-05-17 00:58:37.294271 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] ****************
2025-05-17 00:58:37.294277 | orchestrator | Saturday 17 May 2025 00:47:57 +0000 (0:00:01.065) 0:02:10.097 **********
2025-05-17 00:58:37.294283 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.294289 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.294295 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.294301 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.294308 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.294314 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.294320 | orchestrator |
2025-05-17 00:58:37.294326 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] **************************
2025-05-17 00:58:37.294332 | orchestrator | Saturday 17 May 2025 00:47:58 +0000 (0:00:00.911) 0:02:11.009 **********
2025-05-17 00:58:37.294338 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-17 00:58:37.294344 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-17 00:58:37.294350 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-17 00:58:37.294356 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-17 00:58:37.294362 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-17 00:58:37.294368 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-17 00:58:37.294375 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-17 00:58:37.294381 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-17 00:58:37.294387 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-17 00:58:37.294405 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-17 00:58:37.294412 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-17 00:58:37.294418 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-17 00:58:37.294424 | orchestrator |
2025-05-17 00:58:37.294430 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ********************
2025-05-17 00:58:37.294436 | orchestrator | Saturday 17 May 2025 00:48:00 +0000 (0:00:01.384) 0:02:12.393 **********
2025-05-17 00:58:37.294442 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.294449 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.294455 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.294461 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.294467 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.294473 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.294480 | orchestrator |
2025-05-17 00:58:37.294486 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************
2025-05-17 00:58:37.294496 | orchestrator | Saturday 17 May 2025 00:48:01 +0000 (0:00:01.253) 0:02:13.647 **********
2025-05-17 00:58:37.294502 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.294508 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.294514 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.294520 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.294526 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.294537 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.294543 | orchestrator |
2025-05-17 00:58:37.294550 | orchestrator | TASK [ceph-container-common : include registry.yml] ****************************
2025-05-17 00:58:37.294556 | orchestrator | Saturday 17 May 2025 00:48:01 +0000 (0:00:00.600) 0:02:14.248 **********
2025-05-17 00:58:37.294562 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.294568 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.294574 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.294581 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.294587 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.294593 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.294599 | orchestrator |
2025-05-17 00:58:37.294605 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] *************************
2025-05-17 00:58:37.294611 | orchestrator | Saturday 17 May 2025 00:48:02 +0000 (0:00:00.802) 0:02:15.051 **********
2025-05-17 00:58:37.294617 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.294624 | orchestrator |
2025-05-17 00:58:37.294630 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] ***
2025-05-17 00:58:37.294636 | orchestrator | Saturday 17 May 2025 00:48:03 +0000 (0:00:01.221) 0:02:16.272 **********
2025-05-17 00:58:37.294642 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.294649 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.294655 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.294661 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.294667 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.294673 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.294679 | orchestrator |
2025-05-17 00:58:37.294685 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] ***
2025-05-17 00:58:37.294691 | orchestrator | Saturday 17 May 2025 00:48:52 +0000 (0:00:48.814) 0:03:05.086 **********
2025-05-17 00:58:37.294698 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-17 00:58:37.294704 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-17 00:58:37.294710 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-17 00:58:37.294716 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.294722 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-17 00:58:37.294728 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-17 00:58:37.294734 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-17 00:58:37.294740 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.294746 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-17 00:58:37.294752 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-17 00:58:37.294758 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-17 00:58:37.294765 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.294771 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-17 00:58:37.294777 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-17 00:58:37.294783 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-17 00:58:37.294789 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.294795 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-17 00:58:37.294802 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-17 00:58:37.294808 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-17 00:58:37.294814 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.294824 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-17 00:58:37.294831 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-17 00:58:37.294837 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-17 00:58:37.294843 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.294849 | orchestrator |
2025-05-17 00:58:37.294855 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] ***********
2025-05-17 00:58:37.294866 | orchestrator | Saturday 17 May 2025 00:48:53 +0000 (0:00:01.002) 0:03:06.088 **********
2025-05-17 00:58:37.294872 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.294879 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.294885 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.294891 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.294897 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.294903 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.294909 | orchestrator |
2025-05-17 00:58:37.294915 | orchestrator | TASK [ceph-container-common : export local ceph dev image] *********************
2025-05-17 00:58:37.294962 | orchestrator | Saturday 17 May 2025 00:48:54 +0000 (0:00:00.655) 0:03:06.744 **********
2025-05-17 00:58:37.294969 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.294976 | orchestrator |
2025-05-17 00:58:37.294982 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************
2025-05-17 00:58:37.294988 | orchestrator | Saturday 17 May 2025 00:48:54 +0000 (0:00:00.175) 0:03:06.919 **********
2025-05-17 00:58:37.294994 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295004 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295011 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295017 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295023 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295029 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295035 | orchestrator |
2025-05-17 00:58:37.295041 | orchestrator | TASK [ceph-container-common : load ceph dev image] *****************************
2025-05-17 00:58:37.295047 | orchestrator | Saturday 17 May 2025 00:48:55 +0000 (0:00:00.906) 0:03:07.826 **********
2025-05-17 00:58:37.295053 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295059 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295064 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295070 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295075 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295080 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295086 | orchestrator |
2025-05-17 00:58:37.295091 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ******************
2025-05-17 00:58:37.295096 | orchestrator | Saturday 17 May 2025 00:48:56 +0000 (0:00:00.589) 0:03:08.416 **********
2025-05-17 00:58:37.295102 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295107 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295112 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295118 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295123 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295128 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295134 | orchestrator |
2025-05-17 00:58:37.295139 | orchestrator | TASK [ceph-container-common : get ceph version] ********************************
2025-05-17 00:58:37.295144 | orchestrator | Saturday 17 May 2025 00:48:56 +0000 (0:00:00.766) 0:03:09.182 **********
2025-05-17 00:58:37.295150 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.295155 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.295161 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.295166 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.295172 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.295177 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.295182 | orchestrator |
2025-05-17 00:58:37.295187 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] ***
2025-05-17 00:58:37.295198 | orchestrator | Saturday 17 May 2025 00:48:59 +0000 (0:00:02.512) 0:03:11.695 **********
2025-05-17 00:58:37.295203 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.295208 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.295214 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.295219 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.295224 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.295230 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.295235 | orchestrator |
2025-05-17 00:58:37.295240 | orchestrator | TASK [ceph-container-common : include release.yml] *****************************
2025-05-17 00:58:37.295246 | orchestrator | Saturday 17 May 2025 00:48:59 +0000 (0:00:00.532) 0:03:12.227 **********
2025-05-17 00:58:37.295252 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.295259 | orchestrator |
2025-05-17 00:58:37.295264 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] *********************
2025-05-17 00:58:37.295269 | orchestrator | Saturday 17 May 2025 00:49:01 +0000 (0:00:01.092) 0:03:13.319 **********
2025-05-17 00:58:37.295274 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295280 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295285 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295290 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295296 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295301 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295306 | orchestrator |
2025-05-17 00:58:37.295311 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ********************
2025-05-17 00:58:37.295317 | orchestrator | Saturday 17 May 2025 00:49:01 +0000 (0:00:00.836) 0:03:14.156 **********
2025-05-17 00:58:37.295322 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295328 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295333 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295338 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295343 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295349 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295354 | orchestrator |
2025-05-17 00:58:37.295359 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ******************
2025-05-17 00:58:37.295365 | orchestrator | Saturday 17 May 2025 00:49:02 +0000 (0:00:00.655) 0:03:14.812 **********
2025-05-17 00:58:37.295370 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295375 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295381 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295386 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295391 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295397 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295402 | orchestrator |
2025-05-17 00:58:37.295407 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] *********************
2025-05-17 00:58:37.295413 | orchestrator | Saturday 17 May 2025 00:49:03 +0000 (0:00:00.812) 0:03:15.625 **********
2025-05-17 00:58:37.295418 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295428 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295434 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295439 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295444 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295450 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295455 | orchestrator |
2025-05-17 00:58:37.295460 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ******************
2025-05-17 00:58:37.295466 | orchestrator | Saturday 17 May 2025 00:49:03 +0000 (0:00:00.656) 0:03:16.281 **********
2025-05-17 00:58:37.295471 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295476 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295482 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295487 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295492 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295502 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295507 | orchestrator |
2025-05-17 00:58:37.295512 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] *******************
2025-05-17 00:58:37.295521 | orchestrator | Saturday 17 May 2025 00:49:04 +0000 (0:00:00.921) 0:03:17.202 **********
2025-05-17 00:58:37.295526 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295532 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295537 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295542 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295548 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295553 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295558 | orchestrator |
2025-05-17 00:58:37.295563 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] *******************
2025-05-17 00:58:37.295569 | orchestrator | Saturday 17 May 2025 00:49:05 +0000 (0:00:00.731) 0:03:17.934 **********
2025-05-17 00:58:37.295574 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.295579 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.295585 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.295590 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.295595 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.295601 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.295606 | orchestrator |
2025-05-17 00:58:37.295611 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ********************
2025-05-17 00:58:37.295617 | orchestrator | Saturday 17 May 2025 00:49:06 +0000 (0:00:00.894) 0:03:18.829 **********
2025-05-17 00:58:37.295622 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.295627 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.295633 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.295638 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.295644 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.295649 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.295654 | orchestrator |
2025-05-17 00:58:37.295660 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-17 00:58:37.295665 | orchestrator | Saturday 17 May 2025 00:49:08 +0000 (0:00:01.474) 0:03:20.303 **********
2025-05-17 00:58:37.295671 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.295676 | orchestrator |
2025-05-17 00:58:37.295681 | orchestrator | TASK [ceph-config : create ceph initial directories] ***************************
2025-05-17 00:58:37.295687 | orchestrator | Saturday 17 May 2025 00:49:09 +0000 (0:00:01.321) 0:03:21.625 **********
2025-05-17 00:58:37.295692 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-05-17 00:58:37.295697 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-05-17 00:58:37.295703 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-05-17 00:58:37.295708 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-05-17 00:58:37.295713 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-05-17 00:58:37.295719 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-05-17 00:58:37.295724 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-05-17 00:58:37.295729 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-05-17 00:58:37.295734 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-05-17 00:58:37.295740 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-05-17 00:58:37.295745 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-05-17 00:58:37.295750 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-05-17 00:58:37.295756 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-05-17 00:58:37.295761 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-05-17 00:58:37.295766 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-05-17 00:58:37.295772 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-05-17 00:58:37.295781 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-05-17 00:58:37.295786 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-05-17 00:58:37.295792 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-05-17 00:58:37.295797 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-05-17 00:58:37.295802 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-05-17 00:58:37.295808 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-05-17 00:58:37.295813 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-05-17 00:58:37.295818 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-05-17 00:58:37.295824 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-05-17 00:58:37.295829 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-05-17 00:58:37.295834 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-05-17 00:58:37.295840 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-05-17 00:58:37.295845 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-05-17 00:58:37.295850 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-05-17 00:58:37.295859 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-05-17 00:58:37.295865 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-05-17 00:58:37.295870 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-05-17 00:58:37.295876 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-05-17 00:58:37.295881 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-05-17 00:58:37.295886 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-05-17 00:58:37.295892 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-05-17 00:58:37.295897 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-05-17 00:58:37.295902 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-17 00:58:37.295907 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-05-17 00:58:37.295916 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-17 00:58:37.295936 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-05-17 00:58:37.295942 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-17 00:58:37.295947 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-05-17 00:58:37.295952 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-17 00:58:37.295958 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-17 00:58:37.295963 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-17 00:58:37.295968 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-05-17 00:58:37.295974 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-17 00:58:37.295979 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-17 00:58:37.295984 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-17 00:58:37.295990 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-17 00:58:37.295995 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-17 00:58:37.296000 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-17 00:58:37.296006 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-17 00:58:37.296011 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-17 00:58:37.296017 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-17 00:58:37.296026 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-17 00:58:37.296031 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-17 00:58:37.296037 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-17 00:58:37.296042 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-17 00:58:37.296048 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-17 00:58:37.296053 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-17 00:58:37.296058 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-17 00:58:37.296064 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-17 00:58:37.296069 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-17 00:58:37.296074 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-17 00:58:37.296080 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-17 00:58:37.296085 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-17 00:58:37.296091 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-17 00:58:37.296096 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-17 00:58:37.296102 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-17 00:58:37.296107 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-17 00:58:37.296113 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-17 00:58:37.296118 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-05-17 00:58:37.296123 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-05-17 00:58:37.296129 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-05-17 00:58:37.296134 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-17 00:58:37.296139 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-17 00:58:37.296145 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-05-17 00:58:37.296150
| orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-17 00:58:37.296156 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-17 00:58:37.296161 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-17 00:58:37.296166 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-17 00:58:37.296172 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-17 00:58:37.296177 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-17 00:58:37.296182 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-17 00:58:37.296191 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-17 00:58:37.296197 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-17 00:58:37.296202 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-17 00:58:37.296207 | orchestrator | 2025-05-17 00:58:37.296213 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-17 00:58:37.296218 | orchestrator | Saturday 17 May 2025 00:49:15 +0000 (0:00:06.150) 0:03:27.776 ********** 2025-05-17 00:58:37.296224 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296229 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296234 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296240 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.296245 | orchestrator | 2025-05-17 00:58:37.296250 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-17 00:58:37.296259 | orchestrator | Saturday 17 May 2025 00:49:16 +0000 (0:00:01.180) 0:03:28.957 ********** 2025-05-17 00:58:37.296272 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-17 00:58:37.296278 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-17 00:58:37.296283 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-17 00:58:37.296288 | orchestrator | 2025-05-17 00:58:37.296294 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-17 00:58:37.296299 | orchestrator | Saturday 17 May 2025 00:49:17 +0000 (0:00:01.205) 0:03:30.162 ********** 2025-05-17 00:58:37.296305 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-17 00:58:37.296310 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-17 00:58:37.296315 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-17 00:58:37.296321 | orchestrator | 2025-05-17 00:58:37.296326 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-17 00:58:37.296331 | orchestrator | Saturday 17 May 2025 00:49:19 +0000 (0:00:01.315) 0:03:31.478 ********** 2025-05-17 00:58:37.296337 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296342 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296347 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296353 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.296359 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.296364 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.296369 | orchestrator | 2025-05-17 00:58:37.296375 | orchestrator 
| TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-17 00:58:37.296380 | orchestrator | Saturday 17 May 2025 00:49:20 +0000 (0:00:00.922) 0:03:32.401 ********** 2025-05-17 00:58:37.296385 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296391 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296396 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296401 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.296407 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.296412 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.296417 | orchestrator | 2025-05-17 00:58:37.296423 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-17 00:58:37.296428 | orchestrator | Saturday 17 May 2025 00:49:20 +0000 (0:00:00.754) 0:03:33.155 ********** 2025-05-17 00:58:37.296433 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296439 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296444 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296450 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.296455 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.296460 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.296466 | orchestrator | 2025-05-17 00:58:37.296471 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-17 00:58:37.296476 | orchestrator | Saturday 17 May 2025 00:49:21 +0000 (0:00:00.771) 0:03:33.927 ********** 2025-05-17 00:58:37.296482 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296487 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296492 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296498 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.296503 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.296508 | 
orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.296514 | orchestrator | 2025-05-17 00:58:37.296519 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-17 00:58:37.296524 | orchestrator | Saturday 17 May 2025 00:49:22 +0000 (0:00:00.497) 0:03:34.424 ********** 2025-05-17 00:58:37.296534 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296539 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296545 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296550 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.296555 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.296561 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.296566 | orchestrator | 2025-05-17 00:58:37.296571 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-17 00:58:37.296577 | orchestrator | Saturday 17 May 2025 00:49:22 +0000 (0:00:00.686) 0:03:35.111 ********** 2025-05-17 00:58:37.296582 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296587 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296593 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296598 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.296604 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.296609 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.296614 | orchestrator | 2025-05-17 00:58:37.296623 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-17 00:58:37.296629 | orchestrator | Saturday 17 May 2025 00:49:23 +0000 (0:00:00.564) 0:03:35.676 ********** 2025-05-17 00:58:37.296634 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296640 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296645 | orchestrator | skipping: 
[testbed-node-2] 2025-05-17 00:58:37.296650 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.296656 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.296661 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.296666 | orchestrator | 2025-05-17 00:58:37.296672 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-17 00:58:37.296677 | orchestrator | Saturday 17 May 2025 00:49:24 +0000 (0:00:00.665) 0:03:36.341 ********** 2025-05-17 00:58:37.296682 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296688 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296696 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296702 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.296707 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.296712 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.296718 | orchestrator | 2025-05-17 00:58:37.296723 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-17 00:58:37.296728 | orchestrator | Saturday 17 May 2025 00:49:24 +0000 (0:00:00.560) 0:03:36.901 ********** 2025-05-17 00:58:37.296734 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296739 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296744 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296750 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.296755 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.296761 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.296766 | orchestrator | 2025-05-17 00:58:37.296772 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-17 00:58:37.296777 | orchestrator | Saturday 17 May 2025 00:49:26 +0000 (0:00:02.087) 0:03:38.989 ********** 2025-05-17 00:58:37.296782 | 
orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296787 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296793 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296798 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.296804 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.296809 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.296814 | orchestrator | 2025-05-17 00:58:37.296819 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-17 00:58:37.296825 | orchestrator | Saturday 17 May 2025 00:49:27 +0000 (0:00:00.630) 0:03:39.619 ********** 2025-05-17 00:58:37.296830 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-17 00:58:37.296839 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-17 00:58:37.296844 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296850 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-17 00:58:37.296855 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-17 00:58:37.296860 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296866 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-17 00:58:37.296871 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-17 00:58:37.296876 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.296882 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-17 00:58:37.296887 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-17 00:58:37.296892 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.296897 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-17 00:58:37.296903 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-17 00:58:37.296908 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.296913 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-17 00:58:37.296933 | orchestrator | skipping: 
[testbed-node-5] => (item=)  2025-05-17 00:58:37.296940 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.296945 | orchestrator | 2025-05-17 00:58:37.296950 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-17 00:58:37.296956 | orchestrator | Saturday 17 May 2025 00:49:28 +0000 (0:00:01.044) 0:03:40.664 ********** 2025-05-17 00:58:37.296961 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-17 00:58:37.296967 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-17 00:58:37.296972 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.296978 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-17 00:58:37.296983 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-17 00:58:37.296988 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.296994 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-17 00:58:37.296999 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-17 00:58:37.297004 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297010 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-17 00:58:37.297015 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-17 00:58:37.297021 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-17 00:58:37.297026 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-17 00:58:37.297031 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-17 00:58:37.297037 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-17 00:58:37.297042 | orchestrator | 2025-05-17 00:58:37.297047 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-17 00:58:37.297053 | orchestrator | Saturday 17 May 2025 00:49:29 
+0000 (0:00:00.796) 0:03:41.460 ********** 2025-05-17 00:58:37.297058 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297064 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297069 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297075 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.297080 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.297086 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.297091 | orchestrator | 2025-05-17 00:58:37.297096 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-17 00:58:37.297105 | orchestrator | Saturday 17 May 2025 00:49:30 +0000 (0:00:01.075) 0:03:42.535 ********** 2025-05-17 00:58:37.297111 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297116 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297122 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297127 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.297133 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.297138 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.297147 | orchestrator | 2025-05-17 00:58:37.297153 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-17 00:58:37.297158 | orchestrator | Saturday 17 May 2025 00:49:30 +0000 (0:00:00.687) 0:03:43.223 ********** 2025-05-17 00:58:37.297164 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297169 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297174 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297180 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.297188 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.297194 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.297199 | orchestrator | 2025-05-17 
00:58:37.297204 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-17 00:58:37.297210 | orchestrator | Saturday 17 May 2025 00:49:31 +0000 (0:00:00.878) 0:03:44.102 ********** 2025-05-17 00:58:37.297216 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297221 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297226 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297232 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.297237 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.297243 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.297248 | orchestrator | 2025-05-17 00:58:37.297254 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-17 00:58:37.297259 | orchestrator | Saturday 17 May 2025 00:49:32 +0000 (0:00:00.686) 0:03:44.789 ********** 2025-05-17 00:58:37.297264 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297270 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297275 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297281 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.297286 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.297291 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.297297 | orchestrator | 2025-05-17 00:58:37.297302 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-17 00:58:37.297308 | orchestrator | Saturday 17 May 2025 00:49:33 +0000 (0:00:00.890) 0:03:45.679 ********** 2025-05-17 00:58:37.297313 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297318 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297324 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297329 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.297335 | orchestrator | ok: [testbed-node-4] 
2025-05-17 00:58:37.297340 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.297346 | orchestrator | 2025-05-17 00:58:37.297351 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-17 00:58:37.297357 | orchestrator | Saturday 17 May 2025 00:49:34 +0000 (0:00:00.756) 0:03:46.436 ********** 2025-05-17 00:58:37.297362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.297367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.297373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.297378 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297384 | orchestrator | 2025-05-17 00:58:37.297389 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-17 00:58:37.297394 | orchestrator | Saturday 17 May 2025 00:49:34 +0000 (0:00:00.641) 0:03:47.077 ********** 2025-05-17 00:58:37.297400 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.297405 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.297411 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.297416 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297421 | orchestrator | 2025-05-17 00:58:37.297427 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-17 00:58:37.297432 | orchestrator | Saturday 17 May 2025 00:49:35 +0000 (0:00:00.717) 0:03:47.795 ********** 2025-05-17 00:58:37.297441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.297446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.297452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.297457 | orchestrator | skipping: 
[testbed-node-0] 2025-05-17 00:58:37.297462 | orchestrator | 2025-05-17 00:58:37.297468 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 00:58:37.297473 | orchestrator | Saturday 17 May 2025 00:49:35 +0000 (0:00:00.402) 0:03:48.197 ********** 2025-05-17 00:58:37.297478 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297484 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297489 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297495 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.297500 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.297506 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.297511 | orchestrator | 2025-05-17 00:58:37.297516 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-17 00:58:37.297522 | orchestrator | Saturday 17 May 2025 00:49:36 +0000 (0:00:00.666) 0:03:48.864 ********** 2025-05-17 00:58:37.297527 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-17 00:58:37.297533 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297538 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-17 00:58:37.297543 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297549 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-17 00:58:37.297554 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297560 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-17 00:58:37.297565 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-17 00:58:37.297570 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-17 00:58:37.297576 | orchestrator | 2025-05-17 00:58:37.297584 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-17 00:58:37.297590 | orchestrator | Saturday 17 May 2025 00:49:38 +0000 (0:00:01.659) 0:03:50.523 ********** 2025-05-17 
00:58:37.297595 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297601 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297606 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297612 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.297617 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.297622 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.297628 | orchestrator | 2025-05-17 00:58:37.297633 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 00:58:37.297638 | orchestrator | Saturday 17 May 2025 00:49:38 +0000 (0:00:00.610) 0:03:51.134 ********** 2025-05-17 00:58:37.297644 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297649 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297655 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297660 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.297665 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.297673 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.297679 | orchestrator | 2025-05-17 00:58:37.297684 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-17 00:58:37.297690 | orchestrator | Saturday 17 May 2025 00:49:39 +0000 (0:00:00.927) 0:03:52.061 ********** 2025-05-17 00:58:37.297695 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-17 00:58:37.297701 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297706 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-17 00:58:37.297712 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297717 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-17 00:58:37.297722 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297728 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-17 00:58:37.297733 | 
orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.297738 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-17 00:58:37.297747 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.297753 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-17 00:58:37.297758 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.297763 | orchestrator | 2025-05-17 00:58:37.297769 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-17 00:58:37.297774 | orchestrator | Saturday 17 May 2025 00:49:40 +0000 (0:00:01.163) 0:03:53.225 ********** 2025-05-17 00:58:37.297780 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297785 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297790 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297796 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-17 00:58:37.297801 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.297807 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-17 00:58:37.297812 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.297818 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-17 00:58:37.297823 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.297829 | orchestrator | 2025-05-17 00:58:37.297834 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-17 00:58:37.297840 | orchestrator | Saturday 17 May 2025 00:49:41 +0000 (0:00:00.900) 0:03:54.125 ********** 2025-05-17 00:58:37.297845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.297850 
| orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.297856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.297861 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.297866 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-17 00:58:37.297872 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-17 00:58:37.297877 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-17 00:58:37.297882 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.297888 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-17 00:58:37.297893 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-17 00:58:37.297899 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-17 00:58:37.297904 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.297909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.297915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.297935 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-17 00:58:37.297944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.297954 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.297962 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-17 00:58:37.297970 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-17 00:58:37.297980 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-17 00:58:37.297985 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.297991 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-17 00:58:37.297996 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-17 
00:58:37.298001 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.298007 | orchestrator | 2025-05-17 00:58:37.298012 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-17 00:58:37.298178 | orchestrator | Saturday 17 May 2025 00:49:43 +0000 (0:00:01.635) 0:03:55.761 ********** 2025-05-17 00:58:37.298187 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:37.298194 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.298204 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:37.298210 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:37.298262 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.298270 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.298276 | orchestrator | 2025-05-17 00:58:37.298282 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-17 00:58:37.298288 | orchestrator | Saturday 17 May 2025 00:49:48 +0000 (0:00:04.731) 0:04:00.492 ********** 2025-05-17 00:58:37.298294 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:37.298300 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:37.298306 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:37.298311 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.298317 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.298323 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.298328 | orchestrator | 2025-05-17 00:58:37.298334 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-17 00:58:37.298340 | orchestrator | Saturday 17 May 2025 00:49:49 +0000 (0:00:01.052) 0:04:01.544 ********** 2025-05-17 00:58:37.298345 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.298351 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.298361 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.298367 | 
orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:58:37.298373 | orchestrator | 2025-05-17 00:58:37.298378 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-17 00:58:37.298383 | orchestrator | Saturday 17 May 2025 00:49:50 +0000 (0:00:00.916) 0:04:02.461 ********** 2025-05-17 00:58:37.298389 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.298394 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.298400 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.298405 | orchestrator | 2025-05-17 00:58:37.298411 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-17 00:58:37.298416 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.298422 | orchestrator | 2025-05-17 00:58:37.298427 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-17 00:58:37.298432 | orchestrator | Saturday 17 May 2025 00:49:51 +0000 (0:00:01.018) 0:04:03.480 ********** 2025-05-17 00:58:37.298438 | orchestrator | 2025-05-17 00:58:37.298443 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-17 00:58:37.298449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.298454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.298459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.298465 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.298470 | orchestrator | 2025-05-17 00:58:37.298475 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-17 00:58:37.298481 | orchestrator | changed: [testbed-node-0] 2025-05-17 
00:58:37.298486 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:37.298491 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:37.298497 | orchestrator | 2025-05-17 00:58:37.298502 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-17 00:58:37.298507 | orchestrator | Saturday 17 May 2025 00:49:52 +0000 (0:00:01.316) 0:04:04.796 ********** 2025-05-17 00:58:37.298513 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-17 00:58:37.298518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-17 00:58:37.298523 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-17 00:58:37.298529 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.298534 | orchestrator | 2025-05-17 00:58:37.298539 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-17 00:58:37.298550 | orchestrator | Saturday 17 May 2025 00:49:53 +0000 (0:00:00.925) 0:04:05.722 ********** 2025-05-17 00:58:37.298555 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.298561 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.298566 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.298572 | orchestrator | 2025-05-17 00:58:37.298577 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-17 00:58:37.298582 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.298587 | orchestrator | 2025-05-17 00:58:37.298593 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-17 00:58:37.298598 | orchestrator | Saturday 17 May 2025 00:49:54 +0000 (0:00:00.772) 0:04:06.494 ********** 2025-05-17 00:58:37.298604 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.298609 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.298614 | orchestrator | skipping: [testbed-node-2] 
2025-05-17 00:58:37.298619 | orchestrator | 2025-05-17 00:58:37.298625 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-17 00:58:37.298630 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.298635 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.298641 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.298646 | orchestrator | 2025-05-17 00:58:37.298651 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-17 00:58:37.298657 | orchestrator | Saturday 17 May 2025 00:49:54 +0000 (0:00:00.625) 0:04:07.120 ********** 2025-05-17 00:58:37.298662 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.298668 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.298673 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.298678 | orchestrator | 2025-05-17 00:58:37.298684 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-17 00:58:37.298689 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.298694 | orchestrator | 2025-05-17 00:58:37.298700 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-17 00:58:37.298705 | orchestrator | Saturday 17 May 2025 00:49:55 +0000 (0:00:00.774) 0:04:07.895 ********** 2025-05-17 00:58:37.298710 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.298716 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.298721 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.298726 | orchestrator | 2025-05-17 00:58:37.298732 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-17 00:58:37.298737 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.298742 | orchestrator | 2025-05-17 00:58:37.298748 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact 
pools_pgautoscaler_mode] ************** 2025-05-17 00:58:37.298793 | orchestrator | Saturday 17 May 2025 00:49:56 +0000 (0:00:00.789) 0:04:08.684 ********** 2025-05-17 00:58:37.298800 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.298806 | orchestrator | 2025-05-17 00:58:37.298811 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-17 00:58:37.298817 | orchestrator | Saturday 17 May 2025 00:49:56 +0000 (0:00:00.133) 0:04:08.818 ********** 2025-05-17 00:58:37.298822 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.298828 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.298833 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.298838 | orchestrator | 2025-05-17 00:58:37.298844 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-17 00:58:37.298849 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.298854 | orchestrator | 2025-05-17 00:58:37.298860 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-17 00:58:37.298865 | orchestrator | Saturday 17 May 2025 00:49:57 +0000 (0:00:00.789) 0:04:09.607 ********** 2025-05-17 00:58:37.298871 | orchestrator | 2025-05-17 00:58:37.298879 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-17 00:58:37.298885 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.298890 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:58:37.298901 | orchestrator | 2025-05-17 00:58:37.298906 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-17 00:58:37.298912 | orchestrator | Saturday 17 May 2025 00:49:58 +0000 (0:00:00.766) 0:04:10.374 ********** 2025-05-17 00:58:37.298917 | orchestrator | ok: [testbed-node-0] 
2025-05-17 00:58:37.298958 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.298964 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.298970 | orchestrator | 2025-05-17 00:58:37.298975 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-17 00:58:37.298980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.298986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.298991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.298997 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.299002 | orchestrator | 2025-05-17 00:58:37.299007 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-17 00:58:37.299013 | orchestrator | Saturday 17 May 2025 00:49:59 +0000 (0:00:01.112) 0:04:11.486 ********** 2025-05-17 00:58:37.299018 | orchestrator | 2025-05-17 00:58:37.299024 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-17 00:58:37.299029 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.299034 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.299040 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.299045 | orchestrator | 2025-05-17 00:58:37.299050 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-17 00:58:37.299056 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:37.299061 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:37.299066 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:37.299072 | orchestrator | 2025-05-17 00:58:37.299077 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-17 00:58:37.299082 | orchestrator | Saturday 17 May 2025 00:50:00 +0000 (0:00:01.258) 0:04:12.744 ********** 
2025-05-17 00:58:37.299088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-17 00:58:37.299093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-17 00:58:37.299098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-17 00:58:37.299104 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.299109 | orchestrator | 2025-05-17 00:58:37.299114 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-17 00:58:37.299120 | orchestrator | Saturday 17 May 2025 00:50:01 +0000 (0:00:00.855) 0:04:13.600 ********** 2025-05-17 00:58:37.299125 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.299130 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.299136 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.299141 | orchestrator | 2025-05-17 00:58:37.299147 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-17 00:58:37.299152 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.299157 | orchestrator | 2025-05-17 00:58:37.299162 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-17 00:58:37.299168 | orchestrator | Saturday 17 May 2025 00:50:02 +0000 (0:00:00.935) 0:04:14.536 ********** 2025-05-17 00:58:37.299173 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.299179 | orchestrator | 2025-05-17 00:58:37.299184 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-17 00:58:37.299189 | orchestrator | Saturday 17 May 2025 00:50:02 +0000 (0:00:00.531) 0:04:15.068 ********** 2025-05-17 00:58:37.299195 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.299200 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.299205 | orchestrator | 
skipping: [testbed-node-2] 2025-05-17 00:58:37.299211 | orchestrator | 2025-05-17 00:58:37.299216 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-17 00:58:37.299226 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.299232 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.299237 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.299243 | orchestrator | 2025-05-17 00:58:37.299248 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-17 00:58:37.299253 | orchestrator | Saturday 17 May 2025 00:50:03 +0000 (0:00:01.132) 0:04:16.201 ********** 2025-05-17 00:58:37.299259 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.299264 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.299269 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.299275 | orchestrator | 2025-05-17 00:58:37.299280 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-17 00:58:37.299285 | orchestrator | Saturday 17 May 2025 00:50:05 +0000 (0:00:01.235) 0:04:17.436 ********** 2025-05-17 00:58:37.299291 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:37.299296 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:37.299301 | orchestrator | 2025-05-17 00:58:37.299347 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-17 00:58:37.299355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.299360 | orchestrator | 2025-05-17 00:58:37.299366 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-17 00:58:37.299371 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:37.299376 | orchestrator | 2025-05-17 00:58:37.299382 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-17 
00:58:37.299387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.299393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.299398 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.299403 | orchestrator | 2025-05-17 00:58:37.299409 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-17 00:58:37.299414 | orchestrator | Saturday 17 May 2025 00:50:06 +0000 (0:00:01.602) 0:04:19.039 ********** 2025-05-17 00:58:37.299419 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.299431 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.299437 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.299443 | orchestrator | 2025-05-17 00:58:37.299448 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-17 00:58:37.299453 | orchestrator | Saturday 17 May 2025 00:50:07 +0000 (0:00:01.064) 0:04:20.103 ********** 2025-05-17 00:58:37.299459 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.299464 | orchestrator | 2025-05-17 00:58:37.299470 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-17 00:58:37.299475 | orchestrator | Saturday 17 May 2025 00:50:08 +0000 (0:00:00.639) 0:04:20.742 ********** 2025-05-17 00:58:37.299480 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.299485 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.299490 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.299495 | orchestrator | 2025-05-17 00:58:37.299500 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-17 00:58:37.299505 | orchestrator | Saturday 17 May 2025 00:50:08 +0000 (0:00:00.539) 0:04:21.281 ********** 2025-05-17 00:58:37.299509 | orchestrator | changed: 
[testbed-node-3] 2025-05-17 00:58:37.299514 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.299519 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.299524 | orchestrator | 2025-05-17 00:58:37.299529 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-17 00:58:37.299533 | orchestrator | Saturday 17 May 2025 00:50:10 +0000 (0:00:01.280) 0:04:22.562 ********** 2025-05-17 00:58:37.299538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.299543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.299548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.299556 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.299561 | orchestrator | 2025-05-17 00:58:37.299566 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-17 00:58:37.299571 | orchestrator | Saturday 17 May 2025 00:50:11 +0000 (0:00:00.730) 0:04:23.292 ********** 2025-05-17 00:58:37.299576 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.299580 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.299585 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.299590 | orchestrator | 2025-05-17 00:58:37.299595 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-17 00:58:37.299600 | orchestrator | Saturday 17 May 2025 00:50:11 +0000 (0:00:00.380) 0:04:23.673 ********** 2025-05-17 00:58:37.299604 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.299609 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.299614 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.299619 | orchestrator | 2025-05-17 00:58:37.299623 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-17 00:58:37.299628 | orchestrator | 
Saturday 17 May 2025 00:50:11 +0000 (0:00:00.440) 0:04:24.114 ********** 2025-05-17 00:58:37.299633 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.299638 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.299642 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.299647 | orchestrator | 2025-05-17 00:58:37.299652 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-17 00:58:37.299657 | orchestrator | Saturday 17 May 2025 00:50:12 +0000 (0:00:00.722) 0:04:24.836 ********** 2025-05-17 00:58:37.299661 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.299666 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.299671 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.299676 | orchestrator | 2025-05-17 00:58:37.299681 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-17 00:58:37.299685 | orchestrator | Saturday 17 May 2025 00:50:13 +0000 (0:00:00.461) 0:04:25.298 ********** 2025-05-17 00:58:37.299690 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.299695 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.299699 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.299704 | orchestrator | 2025-05-17 00:58:37.299709 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-17 00:58:37.299714 | orchestrator | 2025-05-17 00:58:37.299719 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-17 00:58:37.299724 | orchestrator | Saturday 17 May 2025 00:50:15 +0000 (0:00:02.255) 0:04:27.554 ********** 2025-05-17 00:58:37.299729 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:58:37.299734 | orchestrator | 2025-05-17 00:58:37.299739 | orchestrator | TASK 
[ceph-handler : check for a mon container] ******************************** 2025-05-17 00:58:37.299743 | orchestrator | Saturday 17 May 2025 00:50:16 +0000 (0:00:00.784) 0:04:28.338 ********** 2025-05-17 00:58:37.299748 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.299753 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.299758 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.299763 | orchestrator | 2025-05-17 00:58:37.299768 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-17 00:58:37.299805 | orchestrator | Saturday 17 May 2025 00:50:16 +0000 (0:00:00.733) 0:04:29.071 ********** 2025-05-17 00:58:37.299813 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.299817 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.299822 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.299827 | orchestrator | 2025-05-17 00:58:37.299832 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-17 00:58:37.299836 | orchestrator | Saturday 17 May 2025 00:50:17 +0000 (0:00:00.353) 0:04:29.425 ********** 2025-05-17 00:58:37.299841 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.299850 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.299854 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.299859 | orchestrator | 2025-05-17 00:58:37.299864 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-17 00:58:37.299869 | orchestrator | Saturday 17 May 2025 00:50:17 +0000 (0:00:00.651) 0:04:30.077 ********** 2025-05-17 00:58:37.299873 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.299878 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.299883 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.299887 | orchestrator | 2025-05-17 00:58:37.299895 | orchestrator | TASK [ceph-handler : check 
for a mgr container] ******************************** 2025-05-17 00:58:37.299900 | orchestrator | Saturday 17 May 2025 00:50:18 +0000 (0:00:00.398) 0:04:30.475 ********** 2025-05-17 00:58:37.299905 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.299910 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.299915 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.299932 | orchestrator | 2025-05-17 00:58:37.299938 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-17 00:58:37.299943 | orchestrator | Saturday 17 May 2025 00:50:18 +0000 (0:00:00.799) 0:04:31.274 ********** 2025-05-17 00:58:37.299947 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.299952 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.299957 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.299962 | orchestrator | 2025-05-17 00:58:37.299967 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-17 00:58:37.299971 | orchestrator | Saturday 17 May 2025 00:50:19 +0000 (0:00:00.832) 0:04:32.106 ********** 2025-05-17 00:58:37.299976 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.299981 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.299985 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.299990 | orchestrator | 2025-05-17 00:58:37.299995 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-17 00:58:37.300000 | orchestrator | Saturday 17 May 2025 00:50:20 +0000 (0:00:00.444) 0:04:32.551 ********** 2025-05-17 00:58:37.300005 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300009 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300014 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300019 | orchestrator | 2025-05-17 00:58:37.300024 | orchestrator | TASK [ceph-handler : check for a rbd-target-api 
container] ********************* 2025-05-17 00:58:37.300028 | orchestrator | Saturday 17 May 2025 00:50:20 +0000 (0:00:00.387) 0:04:32.939 ********** 2025-05-17 00:58:37.300033 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300038 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300043 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300047 | orchestrator | 2025-05-17 00:58:37.300052 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-17 00:58:37.300057 | orchestrator | Saturday 17 May 2025 00:50:20 +0000 (0:00:00.323) 0:04:33.262 ********** 2025-05-17 00:58:37.300062 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300066 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300071 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300076 | orchestrator | 2025-05-17 00:58:37.300081 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-17 00:58:37.300086 | orchestrator | Saturday 17 May 2025 00:50:21 +0000 (0:00:00.434) 0:04:33.697 ********** 2025-05-17 00:58:37.300090 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.300095 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.300100 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.300105 | orchestrator | 2025-05-17 00:58:37.300110 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-17 00:58:37.300114 | orchestrator | Saturday 17 May 2025 00:50:22 +0000 (0:00:00.730) 0:04:34.427 ********** 2025-05-17 00:58:37.300119 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300124 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300132 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300137 | orchestrator | 2025-05-17 00:58:37.300142 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] 
****************************** 2025-05-17 00:58:37.300147 | orchestrator | Saturday 17 May 2025 00:50:22 +0000 (0:00:00.275) 0:04:34.703 ********** 2025-05-17 00:58:37.300152 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.300157 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.300161 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.300166 | orchestrator | 2025-05-17 00:58:37.300171 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-17 00:58:37.300176 | orchestrator | Saturday 17 May 2025 00:50:22 +0000 (0:00:00.343) 0:04:35.046 ********** 2025-05-17 00:58:37.300180 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300185 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300190 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300195 | orchestrator | 2025-05-17 00:58:37.300200 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-17 00:58:37.300204 | orchestrator | Saturday 17 May 2025 00:50:23 +0000 (0:00:00.519) 0:04:35.565 ********** 2025-05-17 00:58:37.300209 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300214 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300219 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300223 | orchestrator | 2025-05-17 00:58:37.300228 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-17 00:58:37.300233 | orchestrator | Saturday 17 May 2025 00:50:23 +0000 (0:00:00.301) 0:04:35.867 ********** 2025-05-17 00:58:37.300238 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300242 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300247 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300252 | orchestrator | 2025-05-17 00:58:37.300257 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] 
****************************** 2025-05-17 00:58:37.300295 | orchestrator | Saturday 17 May 2025 00:50:23 +0000 (0:00:00.291) 0:04:36.158 ********** 2025-05-17 00:58:37.300301 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300306 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300311 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300316 | orchestrator | 2025-05-17 00:58:37.300320 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-17 00:58:37.300325 | orchestrator | Saturday 17 May 2025 00:50:24 +0000 (0:00:00.279) 0:04:36.437 ********** 2025-05-17 00:58:37.300330 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300334 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300339 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300344 | orchestrator | 2025-05-17 00:58:37.300349 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-17 00:58:37.300354 | orchestrator | Saturday 17 May 2025 00:50:24 +0000 (0:00:00.456) 0:04:36.894 ********** 2025-05-17 00:58:37.300359 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.300364 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.300369 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.300373 | orchestrator | 2025-05-17 00:58:37.300382 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-17 00:58:37.300387 | orchestrator | Saturday 17 May 2025 00:50:24 +0000 (0:00:00.341) 0:04:37.235 ********** 2025-05-17 00:58:37.300391 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.300396 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.300401 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.300406 | orchestrator | 2025-05-17 00:58:37.300411 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 
2025-05-17 00:58:37.300415 | orchestrator | Saturday 17 May 2025 00:50:25 +0000 (0:00:00.306) 0:04:37.541 ********** 2025-05-17 00:58:37.300420 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300425 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300430 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300438 | orchestrator | 2025-05-17 00:58:37.300443 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-17 00:58:37.300448 | orchestrator | Saturday 17 May 2025 00:50:25 +0000 (0:00:00.362) 0:04:37.904 ********** 2025-05-17 00:58:37.300453 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300457 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300462 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300467 | orchestrator | 2025-05-17 00:58:37.300472 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-17 00:58:37.300476 | orchestrator | Saturday 17 May 2025 00:50:26 +0000 (0:00:00.596) 0:04:38.500 ********** 2025-05-17 00:58:37.300481 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300486 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300491 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300495 | orchestrator | 2025-05-17 00:58:37.300500 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-17 00:58:37.300505 | orchestrator | Saturday 17 May 2025 00:50:26 +0000 (0:00:00.398) 0:04:38.899 ********** 2025-05-17 00:58:37.300509 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.300514 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.300519 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.300523 | orchestrator | 2025-05-17 00:58:37.300528 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 
2025-05-17 00:58:37.300533 | orchestrator | Saturday 17 May 2025 00:50:27 +0000 (0:00:00.392) 0:04:39.292 **********
2025-05-17 00:58:37.300538 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300542 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300547 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300552 | orchestrator |
2025-05-17 00:58:37.300556 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-17 00:58:37.300561 | orchestrator | Saturday 17 May 2025 00:50:27 +0000 (0:00:00.333) 0:04:39.625 **********
2025-05-17 00:58:37.300566 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300571 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300575 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300580 | orchestrator |
2025-05-17 00:58:37.300585 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-17 00:58:37.300590 | orchestrator | Saturday 17 May 2025 00:50:27 +0000 (0:00:00.610) 0:04:40.236 **********
2025-05-17 00:58:37.300594 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300599 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300604 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300609 | orchestrator |
2025-05-17 00:58:37.300613 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-17 00:58:37.300618 | orchestrator | Saturday 17 May 2025 00:50:28 +0000 (0:00:00.410) 0:04:40.647 **********
2025-05-17 00:58:37.300623 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300628 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300632 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300637 | orchestrator |
2025-05-17 00:58:37.300642 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-17 00:58:37.300647 | orchestrator | Saturday 17 May 2025 00:50:28 +0000 (0:00:00.337) 0:04:40.984 **********
2025-05-17 00:58:37.300651 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300656 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300661 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300665 | orchestrator |
2025-05-17 00:58:37.300670 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-17 00:58:37.300675 | orchestrator | Saturday 17 May 2025 00:50:28 +0000 (0:00:00.237) 0:04:41.222 **********
2025-05-17 00:58:37.300680 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300684 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300689 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300697 | orchestrator |
2025-05-17 00:58:37.300702 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-17 00:58:37.300707 | orchestrator | Saturday 17 May 2025 00:50:29 +0000 (0:00:00.472) 0:04:41.694 **********
2025-05-17 00:58:37.300711 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300716 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300721 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300726 | orchestrator |
2025-05-17 00:58:37.300764 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-17 00:58:37.300771 | orchestrator | Saturday 17 May 2025 00:50:29 +0000 (0:00:00.272) 0:04:41.967 **********
2025-05-17 00:58:37.300776 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300780 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300785 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300790 | orchestrator |
2025-05-17 00:58:37.300795 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-17 00:58:37.300799 | orchestrator | Saturday 17 May 2025 00:50:29 +0000 (0:00:00.277) 0:04:42.245 **********
2025-05-17 00:58:37.300804 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-17 00:58:37.300809 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-17 00:58:37.300814 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-17 00:58:37.300819 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-17 00:58:37.300824 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300832 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300837 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-17 00:58:37.300841 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-17 00:58:37.300846 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300851 | orchestrator |
2025-05-17 00:58:37.300856 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-17 00:58:37.300860 | orchestrator | Saturday 17 May 2025 00:50:30 +0000 (0:00:00.307) 0:04:42.553 **********
2025-05-17 00:58:37.300865 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-17 00:58:37.300870 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-17 00:58:37.300875 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300880 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-17 00:58:37.300884 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-17 00:58:37.300889 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300894 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-17 00:58:37.300899 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-17 00:58:37.300903 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300908 | orchestrator |
2025-05-17 00:58:37.300913 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-17 00:58:37.300917 | orchestrator | Saturday 17 May 2025 00:50:30 +0000 (0:00:00.462) 0:04:43.015 **********
2025-05-17 00:58:37.300937 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300942 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300946 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300951 | orchestrator |
2025-05-17 00:58:37.300956 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-17 00:58:37.300960 | orchestrator | Saturday 17 May 2025 00:50:31 +0000 (0:00:00.273) 0:04:43.289 **********
2025-05-17 00:58:37.300965 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.300970 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.300974 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.300979 | orchestrator |
2025-05-17 00:58:37.300984 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-17 00:58:37.300989 | orchestrator | Saturday 17 May 2025 00:50:31 +0000 (0:00:00.326) 0:04:43.615 **********
2025-05-17 00:58:37.300998 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301002 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301007 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301012 | orchestrator |
2025-05-17 00:58:37.301017 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-17 00:58:37.301021 | orchestrator | Saturday 17 May 2025 00:50:31 +0000 (0:00:00.298) 0:04:43.913 **********
2025-05-17 00:58:37.301026 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301031 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301035 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301040 | orchestrator |
2025-05-17 00:58:37.301045 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-17 00:58:37.301050 | orchestrator | Saturday 17 May 2025 00:50:32 +0000 (0:00:00.434) 0:04:44.348 **********
2025-05-17 00:58:37.301054 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301059 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301064 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301068 | orchestrator |
2025-05-17 00:58:37.301073 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-17 00:58:37.301078 | orchestrator | Saturday 17 May 2025 00:50:32 +0000 (0:00:00.266) 0:04:44.614 **********
2025-05-17 00:58:37.301083 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301087 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301092 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301097 | orchestrator |
2025-05-17 00:58:37.301102 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-17 00:58:37.301106 | orchestrator | Saturday 17 May 2025 00:50:32 +0000 (0:00:00.309) 0:04:44.924 **********
2025-05-17 00:58:37.301111 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:58:37.301116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:58:37.301120 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:58:37.301125 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301130 | orchestrator |
2025-05-17 00:58:37.301135 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-17 00:58:37.301140 | orchestrator | Saturday 17 May 2025 00:50:33 +0000 (0:00:00.411) 0:04:45.336 **********
2025-05-17 00:58:37.301144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:58:37.301149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:58:37.301154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:58:37.301158 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301163 | orchestrator |
2025-05-17 00:58:37.301202 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-17 00:58:37.301209 | orchestrator | Saturday 17 May 2025 00:50:33 +0000 (0:00:00.404) 0:04:45.740 **********
2025-05-17 00:58:37.301214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:58:37.301218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:58:37.301223 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:58:37.301228 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301232 | orchestrator |
2025-05-17 00:58:37.301237 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-17 00:58:37.301242 | orchestrator | Saturday 17 May 2025 00:50:34 +0000 (0:00:00.681) 0:04:46.422 **********
2025-05-17 00:58:37.301247 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301252 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301256 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301261 | orchestrator |
2025-05-17 00:58:37.301266 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-17 00:58:37.301274 | orchestrator | Saturday 17 May 2025 00:50:34 +0000 (0:00:00.601) 0:04:47.023 **********
2025-05-17 00:58:37.301279 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-17 00:58:37.301288 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301292 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-17 00:58:37.301297 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301302 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-17 00:58:37.301307 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301311 | orchestrator |
2025-05-17 00:58:37.301316 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-17 00:58:37.301321 | orchestrator | Saturday 17 May 2025 00:50:35 +0000 (0:00:00.541) 0:04:47.565 **********
2025-05-17 00:58:37.301326 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301331 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301335 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301340 | orchestrator |
2025-05-17 00:58:37.301345 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-17 00:58:37.301350 | orchestrator | Saturday 17 May 2025 00:50:35 +0000 (0:00:00.346) 0:04:47.918 **********
2025-05-17 00:58:37.301354 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301359 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301364 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301369 | orchestrator |
2025-05-17 00:58:37.301373 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-17 00:58:37.301378 | orchestrator | Saturday 17 May 2025 00:50:35 +0000 (0:00:00.346) 0:04:48.264 **********
2025-05-17 00:58:37.301383 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-17 00:58:37.301388 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301392 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-17 00:58:37.301397 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301402 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-17 00:58:37.301406 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301411 | orchestrator |
2025-05-17 00:58:37.301416 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-17 00:58:37.301421 | orchestrator | Saturday 17 May 2025 00:50:36 +0000 (0:00:01.006) 0:04:49.271 **********
2025-05-17 00:58:37.301425 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301430 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301435 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301439 | orchestrator |
2025-05-17 00:58:37.301444 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-17 00:58:37.301449 | orchestrator | Saturday 17 May 2025 00:50:37 +0000 (0:00:00.380) 0:04:49.651 **********
2025-05-17 00:58:37.301453 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:58:37.301458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:58:37.301463 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:58:37.301467 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301472 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-17 00:58:37.301477 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-17 00:58:37.301482 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-17 00:58:37.301486 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301491 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-17 00:58:37.301496 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-17 00:58:37.301500 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-17 00:58:37.301505 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301510 | orchestrator |
2025-05-17 00:58:37.301515 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-17 00:58:37.301519 | orchestrator | Saturday 17 May 2025 00:50:38 +0000 (0:00:00.773) 0:04:50.425 **********
2025-05-17 00:58:37.301524 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301529 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301538 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301543 | orchestrator |
2025-05-17 00:58:37.301547 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-17 00:58:37.301552 | orchestrator | Saturday 17 May 2025 00:50:38 +0000 (0:00:00.569) 0:04:50.995 **********
2025-05-17 00:58:37.301557 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301562 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301566 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301571 | orchestrator |
2025-05-17 00:58:37.301576 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-17 00:58:37.301580 | orchestrator | Saturday 17 May 2025 00:50:39 +0000 (0:00:00.441) 0:04:51.436 **********
2025-05-17 00:58:37.301585 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301590 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301595 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301599 | orchestrator |
2025-05-17 00:58:37.301604 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-17 00:58:37.301643 | orchestrator | Saturday 17 May 2025 00:50:39 +0000 (0:00:00.629) 0:04:52.066 **********
2025-05-17 00:58:37.301650 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301654 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301659 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.301664 | orchestrator |
2025-05-17 00:58:37.301669 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] **********************************
2025-05-17 00:58:37.301673 | orchestrator | Saturday 17 May 2025 00:50:40 +0000 (0:00:00.545) 0:04:52.611 **********
2025-05-17 00:58:37.301678 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.301683 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.301688 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.301692 | orchestrator |
2025-05-17 00:58:37.301697 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] **********************************
2025-05-17 00:58:37.301702 | orchestrator | Saturday 17 May 2025 00:50:40 +0000 (0:00:00.473) 0:04:53.085 **********
2025-05-17 00:58:37.301710 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:58:37.301715 | orchestrator |
2025-05-17 00:58:37.301720 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] **************
2025-05-17 00:58:37.301724 | orchestrator | Saturday 17 May 2025 00:50:41 +0000 (0:00:00.517) 0:04:53.603 **********
2025-05-17 00:58:37.301729 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301734 | orchestrator |
2025-05-17 00:58:37.301739 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] *****************************
2025-05-17 00:58:37.301743 | orchestrator | Saturday 17 May 2025 00:50:41 +0000 (0:00:00.154) 0:04:53.757 **********
2025-05-17 00:58:37.301748 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-17 00:58:37.301753 | orchestrator |
2025-05-17 00:58:37.301757 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] ****************************
2025-05-17 00:58:37.301762 | orchestrator | Saturday 17 May 2025 00:50:42 +0000 (0:00:00.577) 0:04:54.335 **********
2025-05-17 00:58:37.301767 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.301772 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.301777 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.301781 | orchestrator |
2025-05-17 00:58:37.301786 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] *******************
2025-05-17 00:58:37.301791 | orchestrator | Saturday 17 May 2025 00:50:42 +0000 (0:00:00.417) 0:04:54.753 **********
2025-05-17 00:58:37.301796 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.301801 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.301805 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.301810 | orchestrator |
2025-05-17 00:58:37.301815 | orchestrator | TASK [ceph-mon : create monitor initial keyring] *******************************
2025-05-17 00:58:37.301819 | orchestrator | Saturday 17 May 2025 00:50:42 +0000 (0:00:00.415) 0:04:55.168 **********
2025-05-17 00:58:37.301824 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.301833 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.301838 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.301843 | orchestrator |
2025-05-17 00:58:37.301847 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] ***********
2025-05-17 00:58:37.301852 | orchestrator | Saturday 17 May 2025 00:50:44 +0000 (0:00:01.189) 0:04:56.358 **********
2025-05-17 00:58:37.301857 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.301862 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.301866 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.301871 | orchestrator |
2025-05-17 00:58:37.301876 | orchestrator | TASK [ceph-mon : create monitor directory] *************************************
2025-05-17 00:58:37.301881 | orchestrator | Saturday 17 May 2025 00:50:45 +0000 (0:00:01.124) 0:04:57.483 **********
2025-05-17 00:58:37.301885 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.301890 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.301895 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.301900 | orchestrator |
2025-05-17 00:58:37.301904 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] ***************
2025-05-17 00:58:37.301909 | orchestrator | Saturday 17 May 2025 00:50:45 +0000 (0:00:00.759) 0:04:58.242 **********
2025-05-17 00:58:37.301914 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.301952 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.301961 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.301970 | orchestrator |
2025-05-17 00:58:37.301976 | orchestrator | TASK [ceph-mon : create custom admin keyring] **********************************
2025-05-17 00:58:37.301981 | orchestrator | Saturday 17 May 2025 00:50:46 +0000 (0:00:00.769) 0:04:59.012 **********
2025-05-17 00:58:37.301986 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.301991 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.301996 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302000 | orchestrator |
2025-05-17 00:58:37.302005 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] *********************
2025-05-17 00:58:37.302010 | orchestrator | Saturday 17 May 2025 00:50:47 +0000 (0:00:00.359) 0:04:59.371 **********
2025-05-17 00:58:37.302043 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.302049 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.302054 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.302059 | orchestrator |
2025-05-17 00:58:37.302064 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************
2025-05-17 00:58:37.302069 | orchestrator | Saturday 17 May 2025 00:50:47 +0000 (0:00:00.613) 0:04:59.985 **********
2025-05-17 00:58:37.302074 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302078 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302083 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302088 | orchestrator |
2025-05-17 00:58:37.302093 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] **************************
2025-05-17 00:58:37.302097 | orchestrator | Saturday 17 May 2025 00:50:48 +0000 (0:00:00.329) 0:05:00.315 **********
2025-05-17 00:58:37.302102 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.302107 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.302112 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.302117 | orchestrator |
2025-05-17 00:58:37.302121 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************
2025-05-17 00:58:37.302126 | orchestrator | Saturday 17 May 2025 00:50:48 +0000 (0:00:00.364) 0:05:00.680 **********
2025-05-17 00:58:37.302130 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.302135 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.302139 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.302144 | orchestrator |
2025-05-17 00:58:37.302169 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************
2025-05-17 00:58:37.302178 | orchestrator | Saturday 17 May 2025 00:50:49 +0000 (0:00:01.255) 0:05:01.936 **********
2025-05-17 00:58:37.302185 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302193 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302205 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302210 | orchestrator |
2025-05-17 00:58:37.302214 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************
2025-05-17 00:58:37.302219 | orchestrator | Saturday 17 May 2025 00:50:50 +0000 (0:00:00.665) 0:05:02.601 **********
2025-05-17 00:58:37.302223 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:58:37.302228 | orchestrator |
2025-05-17 00:58:37.302232 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] *************
2025-05-17 00:58:37.302237 | orchestrator | Saturday 17 May 2025 00:50:50 +0000 (0:00:00.508) 0:05:03.110 **********
2025-05-17 00:58:37.302244 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302249 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302253 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302258 | orchestrator |
2025-05-17 00:58:37.302262 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************
2025-05-17 00:58:37.302267 | orchestrator | Saturday 17 May 2025 00:50:51 +0000 (0:00:00.285) 0:05:03.395 **********
2025-05-17 00:58:37.302271 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302276 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302280 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302284 | orchestrator |
2025-05-17 00:58:37.302289 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************
2025-05-17 00:58:37.302293 | orchestrator | Saturday 17 May 2025 00:50:51 +0000 (0:00:00.433) 0:05:03.829 **********
2025-05-17 00:58:37.302297 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:58:37.302302 | orchestrator |
2025-05-17 00:58:37.302316 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] *****************
2025-05-17 00:58:37.302321 | orchestrator | Saturday 17 May 2025 00:50:52 +0000 (0:00:00.541) 0:05:04.371 **********
2025-05-17 00:58:37.302326 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.302330 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.302335 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.302339 | orchestrator |
2025-05-17 00:58:37.302344 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************
2025-05-17 00:58:37.302348 | orchestrator | Saturday 17 May 2025 00:50:53 +0000 (0:00:01.137) 0:05:05.508 **********
2025-05-17 00:58:37.302352 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.302357 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.302361 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.302366 | orchestrator |
2025-05-17 00:58:37.302370 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] ***************************************
2025-05-17 00:58:37.302375 | orchestrator | Saturday 17 May 2025 00:50:54 +0000 (0:00:01.255) 0:05:06.763 **********
2025-05-17 00:58:37.302379 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.302384 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.302388 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.302393 | orchestrator |
2025-05-17 00:58:37.302397 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************
2025-05-17 00:58:37.302402 | orchestrator | Saturday 17 May 2025 00:50:56 +0000 (0:00:01.558) 0:05:08.322 **********
2025-05-17 00:58:37.302406 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.302410 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.302415 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.302420 | orchestrator |
2025-05-17 00:58:37.302424 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] **********************************
2025-05-17 00:58:37.302428 | orchestrator | Saturday 17 May 2025 00:50:57 +0000 (0:00:01.761) 0:05:10.083 **********
2025-05-17 00:58:37.302433 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:58:37.302437 | orchestrator |
2025-05-17 00:58:37.302442 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] *************
2025-05-17 00:58:37.302450 | orchestrator | Saturday 17 May 2025 00:50:58 +0000 (0:00:00.931) 0:05:11.015 **********
2025-05-17 00:58:37.302454 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left).
2025-05-17 00:58:37.302459 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.302463 | orchestrator |
2025-05-17 00:58:37.302468 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] **************************************
2025-05-17 00:58:37.302472 | orchestrator | Saturday 17 May 2025 00:51:20 +0000 (0:00:21.481) 0:05:32.497 **********
2025-05-17 00:58:37.302477 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.302482 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.302486 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.302491 | orchestrator |
2025-05-17 00:58:37.302495 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] ***********************************
2025-05-17 00:58:37.302500 | orchestrator | Saturday 17 May 2025 00:51:27 +0000 (0:00:07.205) 0:05:39.703 **********
2025-05-17 00:58:37.302504 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302509 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302513 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302518 | orchestrator |
2025-05-17 00:58:37.302522 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-17 00:58:37.302527 | orchestrator | Saturday 17 May 2025 00:51:28 +0000 (0:00:01.225) 0:05:40.929 **********
2025-05-17 00:58:37.302531 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.302536 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.302540 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.302544 | orchestrator |
2025-05-17 00:58:37.302549 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-05-17 00:58:37.302553 | orchestrator | Saturday 17 May 2025 00:51:29 +0000 (0:00:00.689) 0:05:41.618 **********
2025-05-17 00:58:37.302573 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:58:37.302578 | orchestrator |
2025-05-17 00:58:37.302583 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
2025-05-17 00:58:37.302587 | orchestrator | Saturday 17 May 2025 00:51:30 +0000 (0:00:00.710) 0:05:42.328 **********
2025-05-17 00:58:37.302592 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.302596 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.302601 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.302605 | orchestrator |
2025-05-17 00:58:37.302610 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-05-17 00:58:37.302614 | orchestrator | Saturday 17 May 2025 00:51:30 +0000 (0:00:00.337) 0:05:42.665 **********
2025-05-17 00:58:37.302619 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.302623 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.302628 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.302632 | orchestrator |
2025-05-17 00:58:37.302637 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
2025-05-17 00:58:37.302644 | orchestrator | Saturday 17 May 2025 00:51:31 +0000 (0:00:01.094) 0:05:43.760 **********
2025-05-17 00:58:37.302649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-17 00:58:37.302653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-17 00:58:37.302658 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-17 00:58:37.302662 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302667 | orchestrator |
2025-05-17 00:58:37.302672 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
2025-05-17 00:58:37.302676 | orchestrator | Saturday 17 May 2025 00:51:32 +0000 (0:00:01.121) 0:05:44.881 **********
2025-05-17 00:58:37.302681 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.302685 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.302690 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.302694 | orchestrator |
2025-05-17 00:58:37.302699 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-17 00:58:37.302707 | orchestrator | Saturday 17 May 2025 00:51:32 +0000 (0:00:00.336) 0:05:45.218 **********
2025-05-17 00:58:37.302712 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.302716 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.302721 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.302725 | orchestrator |
2025-05-17 00:58:37.302730 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-05-17 00:58:37.302734 | orchestrator |
2025-05-17 00:58:37.302739 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-17 00:58:37.302743 | orchestrator | Saturday 17 May 2025 00:51:35 +0000 (0:00:02.123) 0:05:47.341 **********
2025-05-17 00:58:37.302748 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:58:37.302752 | orchestrator |
2025-05-17 00:58:37.302757 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-17 00:58:37.302761 | orchestrator | Saturday 17 May 2025 00:51:35 +0000 (0:00:00.784) 0:05:48.126 **********
2025-05-17 00:58:37.302766 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.302770 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.302775 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.302779 | orchestrator |
2025-05-17 00:58:37.302784 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-17 00:58:37.302788 | orchestrator | Saturday 17 May 2025 00:51:36 +0000 (0:00:00.700) 0:05:48.826 **********
2025-05-17 00:58:37.302793 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302797 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302802 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302806 | orchestrator |
2025-05-17 00:58:37.302811 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-17 00:58:37.302815 | orchestrator | Saturday 17 May 2025 00:51:36 +0000 (0:00:00.297) 0:05:49.124 **********
2025-05-17 00:58:37.302820 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302824 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302829 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302833 | orchestrator |
2025-05-17 00:58:37.302838 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-17 00:58:37.302842 | orchestrator | Saturday 17 May 2025 00:51:37 +0000 (0:00:00.596) 0:05:49.720 **********
2025-05-17 00:58:37.302847 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302851 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302856 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302860 | orchestrator |
2025-05-17 00:58:37.302865 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-17 00:58:37.302869 | orchestrator | Saturday 17 May 2025 00:51:37 +0000 (0:00:00.423) 0:05:50.144 **********
2025-05-17 00:58:37.302874 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.302879 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.302883 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.302888 | orchestrator |
2025-05-17 00:58:37.302892 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-17 00:58:37.302897 | orchestrator | Saturday 17 May 2025 00:51:38 +0000 (0:00:00.890) 0:05:51.034 **********
2025-05-17 00:58:37.302902 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302906 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302911 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302915 | orchestrator |
2025-05-17 00:58:37.302935 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-17 00:58:37.302941 | orchestrator | Saturday 17 May 2025 00:51:39 +0000 (0:00:00.314) 0:05:51.349 **********
2025-05-17 00:58:37.302945 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302950 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302954 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302959 | orchestrator |
2025-05-17 00:58:37.302963 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-17 00:58:37.302971 | orchestrator | Saturday 17 May 2025 00:51:39 +0000 (0:00:00.554) 0:05:51.904 **********
2025-05-17 00:58:37.302976 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.302980 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.302985 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.302989 | orchestrator |
2025-05-17 00:58:37.303008 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-17 00:58:37.303013 | orchestrator | Saturday 17 May 2025 00:51:39 +0000 (0:00:00.350) 0:05:52.254 **********
2025-05-17 00:58:37.303018 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.303022 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.303027 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.303031 |
orchestrator | 2025-05-17 00:58:37.303036 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-17 00:58:37.303040 | orchestrator | Saturday 17 May 2025 00:51:40 +0000 (0:00:00.337) 0:05:52.592 ********** 2025-05-17 00:58:37.303045 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303049 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303053 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303058 | orchestrator | 2025-05-17 00:58:37.303062 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-17 00:58:37.303067 | orchestrator | Saturday 17 May 2025 00:51:40 +0000 (0:00:00.349) 0:05:52.941 ********** 2025-05-17 00:58:37.303074 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.303079 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.303084 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.303088 | orchestrator | 2025-05-17 00:58:37.303093 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-17 00:58:37.303097 | orchestrator | Saturday 17 May 2025 00:51:41 +0000 (0:00:01.007) 0:05:53.949 ********** 2025-05-17 00:58:37.303102 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303106 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303111 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303115 | orchestrator | 2025-05-17 00:58:37.303120 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-17 00:58:37.303124 | orchestrator | Saturday 17 May 2025 00:51:41 +0000 (0:00:00.309) 0:05:54.258 ********** 2025-05-17 00:58:37.303129 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.303134 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.303138 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.303143 | orchestrator | 2025-05-17 
00:58:37.303147 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-17 00:58:37.303152 | orchestrator | Saturday 17 May 2025 00:51:42 +0000 (0:00:00.316) 0:05:54.575 ********** 2025-05-17 00:58:37.303156 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303161 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303165 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303170 | orchestrator | 2025-05-17 00:58:37.303174 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-17 00:58:37.303179 | orchestrator | Saturday 17 May 2025 00:51:42 +0000 (0:00:00.300) 0:05:54.876 ********** 2025-05-17 00:58:37.303183 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303188 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303192 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303196 | orchestrator | 2025-05-17 00:58:37.303201 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-17 00:58:37.303205 | orchestrator | Saturday 17 May 2025 00:51:43 +0000 (0:00:00.547) 0:05:55.424 ********** 2025-05-17 00:58:37.303210 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303214 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303219 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303223 | orchestrator | 2025-05-17 00:58:37.303228 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-17 00:58:37.303232 | orchestrator | Saturday 17 May 2025 00:51:43 +0000 (0:00:00.325) 0:05:55.749 ********** 2025-05-17 00:58:37.303240 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303245 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303249 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303254 | orchestrator | 2025-05-17 
00:58:37.303258 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-17 00:58:37.303262 | orchestrator | Saturday 17 May 2025 00:51:43 +0000 (0:00:00.320) 0:05:56.070 ********** 2025-05-17 00:58:37.303267 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303271 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303276 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303280 | orchestrator | 2025-05-17 00:58:37.303285 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-17 00:58:37.303289 | orchestrator | Saturday 17 May 2025 00:51:44 +0000 (0:00:00.340) 0:05:56.411 ********** 2025-05-17 00:58:37.303294 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.303298 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.303303 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.303308 | orchestrator | 2025-05-17 00:58:37.303312 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-17 00:58:37.303317 | orchestrator | Saturday 17 May 2025 00:51:44 +0000 (0:00:00.621) 0:05:57.032 ********** 2025-05-17 00:58:37.303321 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.303326 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.303330 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.303335 | orchestrator | 2025-05-17 00:58:37.303339 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-17 00:58:37.303344 | orchestrator | Saturday 17 May 2025 00:51:45 +0000 (0:00:00.387) 0:05:57.420 ********** 2025-05-17 00:58:37.303348 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303353 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303357 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303362 | orchestrator | 2025-05-17 00:58:37.303366 | orchestrator | TASK 
[ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-17 00:58:37.303370 | orchestrator | Saturday 17 May 2025 00:51:45 +0000 (0:00:00.347) 0:05:57.767 ********** 2025-05-17 00:58:37.303375 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303379 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303384 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303388 | orchestrator | 2025-05-17 00:58:37.303393 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-17 00:58:37.303397 | orchestrator | Saturday 17 May 2025 00:51:45 +0000 (0:00:00.339) 0:05:58.106 ********** 2025-05-17 00:58:37.303402 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303406 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303411 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303415 | orchestrator | 2025-05-17 00:58:37.303434 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-17 00:58:37.303439 | orchestrator | Saturday 17 May 2025 00:51:46 +0000 (0:00:00.597) 0:05:58.703 ********** 2025-05-17 00:58:37.303443 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303448 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303452 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303457 | orchestrator | 2025-05-17 00:58:37.303462 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-17 00:58:37.303466 | orchestrator | Saturday 17 May 2025 00:51:46 +0000 (0:00:00.347) 0:05:59.051 ********** 2025-05-17 00:58:37.303471 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303475 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303480 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303484 | orchestrator | 2025-05-17 00:58:37.303489 | orchestrator | TASK 
[ceph-config : set_fact rejected_devices] ********************************* 2025-05-17 00:58:37.303493 | orchestrator | Saturday 17 May 2025 00:51:47 +0000 (0:00:00.321) 0:05:59.372 ********** 2025-05-17 00:58:37.303501 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303524 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303530 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303534 | orchestrator | 2025-05-17 00:58:37.303539 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-17 00:58:37.303543 | orchestrator | Saturday 17 May 2025 00:51:47 +0000 (0:00:00.308) 0:05:59.681 ********** 2025-05-17 00:58:37.303548 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303552 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303557 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303561 | orchestrator | 2025-05-17 00:58:37.303566 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-17 00:58:37.303571 | orchestrator | Saturday 17 May 2025 00:51:47 +0000 (0:00:00.571) 0:06:00.252 ********** 2025-05-17 00:58:37.303575 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303580 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303584 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303588 | orchestrator | 2025-05-17 00:58:37.303593 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-17 00:58:37.303598 | orchestrator | Saturday 17 May 2025 00:51:48 +0000 (0:00:00.350) 0:06:00.602 ********** 2025-05-17 00:58:37.303602 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303606 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303611 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303615 | orchestrator | 
2025-05-17 00:58:37.303620 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-17 00:58:37.303625 | orchestrator | Saturday 17 May 2025 00:51:48 +0000 (0:00:00.347) 0:06:00.950 ********** 2025-05-17 00:58:37.303629 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303633 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303638 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303642 | orchestrator | 2025-05-17 00:58:37.303647 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-17 00:58:37.303651 | orchestrator | Saturday 17 May 2025 00:51:48 +0000 (0:00:00.317) 0:06:01.268 ********** 2025-05-17 00:58:37.303656 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303660 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303665 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303669 | orchestrator | 2025-05-17 00:58:37.303674 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-17 00:58:37.303678 | orchestrator | Saturday 17 May 2025 00:51:49 +0000 (0:00:00.597) 0:06:01.865 ********** 2025-05-17 00:58:37.303683 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303687 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303692 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303696 | orchestrator | 2025-05-17 00:58:37.303701 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-17 00:58:37.303705 | orchestrator | Saturday 17 May 2025 00:51:49 +0000 (0:00:00.380) 0:06:02.246 ********** 2025-05-17 00:58:37.303710 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-17 00:58:37.303715 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-17 00:58:37.303719 | orchestrator 
| skipping: [testbed-node-0] 2025-05-17 00:58:37.303723 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-17 00:58:37.303728 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-17 00:58:37.303732 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303737 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-17 00:58:37.303741 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-17 00:58:37.303746 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303750 | orchestrator | 2025-05-17 00:58:37.303755 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-17 00:58:37.303759 | orchestrator | Saturday 17 May 2025 00:51:50 +0000 (0:00:00.425) 0:06:02.671 ********** 2025-05-17 00:58:37.303767 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-17 00:58:37.303772 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-17 00:58:37.303776 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-17 00:58:37.303780 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-17 00:58:37.303785 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303789 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303794 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-17 00:58:37.303798 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-17 00:58:37.303803 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303807 | orchestrator | 2025-05-17 00:58:37.303812 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-17 00:58:37.303816 | orchestrator | Saturday 17 May 2025 00:51:50 +0000 (0:00:00.409) 0:06:03.081 ********** 2025-05-17 00:58:37.303821 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303825 | 
orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303830 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303834 | orchestrator | 2025-05-17 00:58:37.303853 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-17 00:58:37.303859 | orchestrator | Saturday 17 May 2025 00:51:51 +0000 (0:00:00.612) 0:06:03.693 ********** 2025-05-17 00:58:37.303863 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303868 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303872 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303877 | orchestrator | 2025-05-17 00:58:37.303881 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-17 00:58:37.303886 | orchestrator | Saturday 17 May 2025 00:51:51 +0000 (0:00:00.380) 0:06:04.074 ********** 2025-05-17 00:58:37.303890 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303895 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303899 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303904 | orchestrator | 2025-05-17 00:58:37.303908 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-17 00:58:37.303916 | orchestrator | Saturday 17 May 2025 00:51:52 +0000 (0:00:00.361) 0:06:04.436 ********** 2025-05-17 00:58:37.303935 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303940 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303945 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303949 | orchestrator | 2025-05-17 00:58:37.303953 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-17 00:58:37.303958 | orchestrator | Saturday 17 May 2025 00:51:52 +0000 (0:00:00.374) 0:06:04.810 ********** 2025-05-17 00:58:37.303962 | orchestrator | 
skipping: [testbed-node-0] 2025-05-17 00:58:37.303967 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303971 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.303976 | orchestrator | 2025-05-17 00:58:37.303980 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-17 00:58:37.303985 | orchestrator | Saturday 17 May 2025 00:51:53 +0000 (0:00:00.614) 0:06:05.424 ********** 2025-05-17 00:58:37.303989 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.303994 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.303998 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.304002 | orchestrator | 2025-05-17 00:58:37.304007 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-17 00:58:37.304012 | orchestrator | Saturday 17 May 2025 00:51:53 +0000 (0:00:00.364) 0:06:05.788 ********** 2025-05-17 00:58:37.304016 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.304020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.304025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.304034 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.304038 | orchestrator | 2025-05-17 00:58:37.304043 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-17 00:58:37.304047 | orchestrator | Saturday 17 May 2025 00:51:53 +0000 (0:00:00.444) 0:06:06.233 ********** 2025-05-17 00:58:37.304052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.304056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.304061 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.304065 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.304070 | 
orchestrator | 2025-05-17 00:58:37.304074 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-17 00:58:37.304079 | orchestrator | Saturday 17 May 2025 00:51:54 +0000 (0:00:00.447) 0:06:06.680 ********** 2025-05-17 00:58:37.304083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.304088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.304092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.304097 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.304101 | orchestrator | 2025-05-17 00:58:37.304105 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 00:58:37.304110 | orchestrator | Saturday 17 May 2025 00:51:54 +0000 (0:00:00.394) 0:06:07.074 ********** 2025-05-17 00:58:37.304115 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.304119 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.304123 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.304128 | orchestrator | 2025-05-17 00:58:37.304132 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-17 00:58:37.304137 | orchestrator | Saturday 17 May 2025 00:51:55 +0000 (0:00:00.618) 0:06:07.692 ********** 2025-05-17 00:58:37.304141 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-17 00:58:37.304146 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.304150 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-17 00:58:37.304155 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.304159 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-17 00:58:37.304164 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.304168 | orchestrator | 2025-05-17 00:58:37.304173 | orchestrator | TASK [ceph-facts : set_fact 
is_rgw_instances_defined] ************************** 2025-05-17 00:58:37.304177 | orchestrator | Saturday 17 May 2025 00:51:55 +0000 (0:00:00.529) 0:06:08.222 ********** 2025-05-17 00:58:37.304182 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.304186 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.304191 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.304195 | orchestrator | 2025-05-17 00:58:37.304200 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 00:58:37.304204 | orchestrator | Saturday 17 May 2025 00:51:56 +0000 (0:00:00.318) 0:06:08.541 ********** 2025-05-17 00:58:37.304209 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.304213 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.304218 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.304222 | orchestrator | 2025-05-17 00:58:37.304227 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-17 00:58:37.304231 | orchestrator | Saturday 17 May 2025 00:51:56 +0000 (0:00:00.293) 0:06:08.834 ********** 2025-05-17 00:58:37.304236 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-17 00:58:37.304255 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.304260 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-17 00:58:37.304265 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.304269 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-17 00:58:37.304273 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.304278 | orchestrator | 2025-05-17 00:58:37.304283 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-17 00:58:37.305493 | orchestrator | Saturday 17 May 2025 00:51:57 +0000 (0:00:00.828) 0:06:09.663 ********** 2025-05-17 00:58:37.305511 | orchestrator | skipping: [testbed-node-0] 
2025-05-17 00:58:37.305517 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.305521 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.305526 | orchestrator | 2025-05-17 00:58:37.305531 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-17 00:58:37.305535 | orchestrator | Saturday 17 May 2025 00:51:57 +0000 (0:00:00.274) 0:06:09.938 ********** 2025-05-17 00:58:37.305540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-17 00:58:37.305545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-17 00:58:37.305550 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-17 00:58:37.305555 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.305560 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-17 00:58:37.305564 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-17 00:58:37.305569 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-17 00:58:37.305573 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.305578 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-17 00:58:37.305583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-17 00:58:37.305587 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-17 00:58:37.305592 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.305596 | orchestrator | 2025-05-17 00:58:37.305601 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-17 00:58:37.305605 | orchestrator | Saturday 17 May 2025 00:51:58 +0000 (0:00:00.575) 0:06:10.514 ********** 2025-05-17 00:58:37.305610 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.305642 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.305647 | orchestrator | skipping: 
[testbed-node-2] 2025-05-17 00:58:37.305651 | orchestrator | 2025-05-17 00:58:37.305656 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-17 00:58:37.305660 | orchestrator | Saturday 17 May 2025 00:51:58 +0000 (0:00:00.630) 0:06:11.144 ********** 2025-05-17 00:58:37.305665 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.305669 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.305674 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.305678 | orchestrator | 2025-05-17 00:58:37.305683 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-17 00:58:37.305688 | orchestrator | Saturday 17 May 2025 00:51:59 +0000 (0:00:00.463) 0:06:11.607 ********** 2025-05-17 00:58:37.305692 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.305697 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.305701 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.305706 | orchestrator | 2025-05-17 00:58:37.305711 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-17 00:58:37.305715 | orchestrator | Saturday 17 May 2025 00:51:59 +0000 (0:00:00.638) 0:06:12.246 ********** 2025-05-17 00:58:37.305720 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.305724 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.305729 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.305733 | orchestrator | 2025-05-17 00:58:37.305738 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-17 00:58:37.305742 | orchestrator | Saturday 17 May 2025 00:52:00 +0000 (0:00:00.494) 0:06:12.740 ********** 2025-05-17 00:58:37.305747 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 00:58:37.305752 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2025-05-17 00:58:37.305756 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 00:58:37.305761 | orchestrator | 2025-05-17 00:58:37.305773 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-17 00:58:37.305777 | orchestrator | Saturday 17 May 2025 00:52:01 +0000 (0:00:00.753) 0:06:13.493 ********** 2025-05-17 00:58:37.305782 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 00:58:37.305787 | orchestrator | 2025-05-17 00:58:37.305792 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-17 00:58:37.305796 | orchestrator | Saturday 17 May 2025 00:52:01 +0000 (0:00:00.787) 0:06:14.281 ********** 2025-05-17 00:58:37.305801 | orchestrator | changed: [testbed-node-0] 2025-05-17 00:58:37.305805 | orchestrator | changed: [testbed-node-1] 2025-05-17 00:58:37.305810 | orchestrator | changed: [testbed-node-2] 2025-05-17 00:58:37.305814 | orchestrator | 2025-05-17 00:58:37.305819 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-17 00:58:37.305823 | orchestrator | Saturday 17 May 2025 00:52:02 +0000 (0:00:00.663) 0:06:14.944 ********** 2025-05-17 00:58:37.305827 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.305832 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.305836 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.305841 | orchestrator | 2025-05-17 00:58:37.305845 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-17 00:58:37.305850 | orchestrator | Saturday 17 May 2025 00:52:02 +0000 (0:00:00.318) 0:06:15.263 ********** 2025-05-17 00:58:37.305854 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-17 00:58:37.305859 | orchestrator | changed: 
[testbed-node-0] => (item=None)
2025-05-17 00:58:37.305863 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-17 00:58:37.305868 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-05-17 00:58:37.305872 | orchestrator |
2025-05-17 00:58:37.305917 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] *******************************************
2025-05-17 00:58:37.305971 | orchestrator | Saturday 17 May 2025 00:52:11 +0000 (0:00:08.317) 0:06:23.580 **********
2025-05-17 00:58:37.305977 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.305981 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.305986 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.305990 | orchestrator |
2025-05-17 00:58:37.305995 | orchestrator | TASK [ceph-mgr : get keys from monitors] ***************************************
2025-05-17 00:58:37.306000 | orchestrator | Saturday 17 May 2025 00:52:11 +0000 (0:00:00.409) 0:06:23.990 **********
2025-05-17 00:58:37.306004 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-17 00:58:37.306009 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-17 00:58:37.306042 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-17 00:58:37.306049 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-17 00:58:37.306054 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-17 00:58:37.306062 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-17 00:58:37.306067 | orchestrator |
2025-05-17 00:58:37.306071 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] ***********************************
2025-05-17 00:58:37.306076 | orchestrator | Saturday 17 May 2025 00:52:13 +0000 (0:00:02.107) 0:06:26.098 **********
2025-05-17 00:58:37.306081 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-17 00:58:37.306085 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-17 00:58:37.306089 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-17 00:58:37.306093 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-17 00:58:37.306097 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-17 00:58:37.306101 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-17 00:58:37.306105 | orchestrator |
2025-05-17 00:58:37.306109 | orchestrator | TASK [ceph-mgr : set mgr key permissions] **************************************
2025-05-17 00:58:37.306113 | orchestrator | Saturday 17 May 2025 00:52:15 +0000 (0:00:01.262) 0:06:27.360 **********
2025-05-17 00:58:37.306117 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.306126 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.306130 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.306134 | orchestrator |
2025-05-17 00:58:37.306138 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] *****************
2025-05-17 00:58:37.306142 | orchestrator | Saturday 17 May 2025 00:52:15 +0000 (0:00:00.672) 0:06:28.032 **********
2025-05-17 00:58:37.306146 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.306150 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.306155 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.306159 | orchestrator |
2025-05-17 00:58:37.306163 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************
2025-05-17 00:58:37.306167 | orchestrator | Saturday 17 May 2025 00:52:16 +0000 (0:00:00.335) 0:06:28.618 **********
2025-05-17 00:58:37.306171 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.306175 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.306179 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.306183 | orchestrator |
2025-05-17 00:58:37.306187 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] ****************************************
2025-05-17 00:58:37.306191 | orchestrator | Saturday 17 May 2025 00:52:16 +0000 (0:00:00.335) 0:06:28.954 **********
2025-05-17 00:58:37.306195 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:58:37.306199 | orchestrator |
2025-05-17 00:58:37.306204 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] *************
2025-05-17 00:58:37.306208 | orchestrator | Saturday 17 May 2025 00:52:17 +0000 (0:00:00.565) 0:06:29.520 **********
2025-05-17 00:58:37.306212 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.306216 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.306220 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.306224 | orchestrator |
2025-05-17 00:58:37.306228 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************
2025-05-17 00:58:37.306232 | orchestrator | Saturday 17 May 2025 00:52:17 +0000 (0:00:00.567) 0:06:30.088 **********
2025-05-17 00:58:37.306236 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.306240 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.306244 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.306248 | orchestrator |
2025-05-17 00:58:37.306252 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************
2025-05-17 00:58:37.306256 | orchestrator | Saturday 17 May 2025 00:52:18 +0000 (0:00:00.376) 0:06:30.464 **********
2025-05-17 00:58:37.306261 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:58:37.306265 | orchestrator |
2025-05-17 00:58:37.306269 | orchestrator | TASK [ceph-mgr : generate systemd unit file] ***********************************
2025-05-17 00:58:37.306273 | orchestrator | Saturday 17 May 2025 00:52:18 +0000 (0:00:00.569) 0:06:31.033 **********
2025-05-17 00:58:37.306277 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.306281 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.306285 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.306289 | orchestrator |
2025-05-17 00:58:37.306293 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************
2025-05-17 00:58:37.306297 | orchestrator | Saturday 17 May 2025 00:52:20 +0000 (0:00:01.393) 0:06:32.427 **********
2025-05-17 00:58:37.306302 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.306306 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.306310 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.306314 | orchestrator |
2025-05-17 00:58:37.306318 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] ***************************************
2025-05-17 00:58:37.306322 | orchestrator | Saturday 17 May 2025 00:52:21 +0000 (0:00:01.055) 0:06:33.482 **********
2025-05-17 00:58:37.306326 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.306330 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.306334 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.306342 | orchestrator |
2025-05-17 00:58:37.306346 | orchestrator | TASK [ceph-mgr : systemd start mgr] ********************************************
2025-05-17 00:58:37.306367 | orchestrator | Saturday 17 May 2025 00:52:22 +0000 (0:00:01.754) 0:06:35.236 **********
2025-05-17 00:58:37.306372 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.306376 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.306380 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.306384 | orchestrator |
2025-05-17 00:58:37.306388 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] **************************************
2025-05-17 00:58:37.306392 | orchestrator | Saturday 17 May 2025 00:52:24 +0000 (0:00:01.883) 0:06:37.120 **********
2025-05-17 00:58:37.306396 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.306400 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.306405 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-05-17 00:58:37.306409 | orchestrator |
2025-05-17 00:58:37.306413 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************
2025-05-17 00:58:37.306417 | orchestrator | Saturday 17 May 2025 00:52:25 +0000 (0:00:00.685) 0:06:37.806 **********
2025-05-17 00:58:37.306424 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left).
2025-05-17 00:58:37.306428 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left).
2025-05-17 00:58:37.306432 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-05-17 00:58:37.306436 | orchestrator |
2025-05-17 00:58:37.306440 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************
2025-05-17 00:58:37.306444 | orchestrator | Saturday 17 May 2025 00:52:38 +0000 (0:00:13.266) 0:06:51.073 **********
2025-05-17 00:58:37.306448 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-05-17 00:58:37.306452 | orchestrator |
2025-05-17 00:58:37.306457 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-05-17 00:58:37.306461 | orchestrator | Saturday 17 May 2025 00:52:40 +0000 (0:00:02.039) 0:06:53.112 **********
2025-05-17 00:58:37.306465 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.306469 | orchestrator |
2025-05-17 00:58:37.306473 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************
2025-05-17 00:58:37.306477 | orchestrator | Saturday 17 May 2025 00:52:41 +0000 (0:00:00.302) 0:06:53.565 **********
2025-05-17 00:58:37.306481 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.306486 | orchestrator |
2025-05-17 00:58:37.306490 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************
2025-05-17 00:58:37.306494 | orchestrator | Saturday 17 May 2025 00:52:41 +0000 (0:00:00.302) 0:06:53.867 **********
2025-05-17 00:58:37.306498 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-05-17 00:58:37.306502 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-05-17 00:58:37.306506 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-05-17 00:58:37.306510 | orchestrator |
2025-05-17 00:58:37.306514 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] **************************************
2025-05-17 00:58:37.306518 | orchestrator | Saturday 17 May 2025 00:52:47 +0000 (0:00:06.212) 0:07:00.080 **********
2025-05-17 00:58:37.306522 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-05-17 00:58:37.306526 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-05-17 00:58:37.306530 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-05-17 00:58:37.306534 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-05-17 00:58:37.306539 | orchestrator |
2025-05-17 00:58:37.306543 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-17 00:58:37.306547 | orchestrator | Saturday 17 May 2025 00:52:52 +0000 (0:00:05.010) 0:07:05.090 **********
2025-05-17 00:58:37.306551 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.306558 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.306562 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.306566 | orchestrator |
2025-05-17 00:58:37.306571 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-05-17 00:58:37.306575 | orchestrator | Saturday 17 May 2025 00:52:53 +0000 (0:00:00.682) 0:07:05.773 **********
2025-05-17 00:58:37.306579 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 00:58:37.306583 | orchestrator |
2025-05-17 00:58:37.306587 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
2025-05-17 00:58:37.306591 | orchestrator | Saturday 17 May 2025 00:52:54 +0000 (0:00:00.350) 0:07:06.530 **********
2025-05-17 00:58:37.306595 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.306599 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.306603 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.306607 | orchestrator |
2025-05-17 00:58:37.306611 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-05-17 00:58:37.306616 | orchestrator | Saturday 17 May 2025 00:52:54 +0000 (0:00:00.350) 0:07:06.880 **********
2025-05-17 00:58:37.306620 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.306624 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.306628 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.306632 | orchestrator |
2025-05-17 00:58:37.306636 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
2025-05-17 00:58:37.306640 | orchestrator | Saturday 17 May 2025 00:52:55 +0000 (0:00:01.173) 0:07:08.054 **********
2025-05-17 00:58:37.306644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-17 00:58:37.306648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-17 00:58:37.306652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-17 00:58:37.306656 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.306661 | orchestrator |
2025-05-17 00:58:37.306665 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
2025-05-17 00:58:37.306669 | orchestrator | Saturday 17 May 2025 00:52:56 +0000 (0:00:01.153) 0:07:09.207 **********
2025-05-17 00:58:37.306685 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.306690 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.306694 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.306698 | orchestrator |
2025-05-17 00:58:37.306702 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-17 00:58:37.306706 | orchestrator | Saturday 17 May 2025 00:52:57 +0000 (0:00:00.343) 0:07:09.551 **********
2025-05-17 00:58:37.306710 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.306714 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.306718 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.306722 | orchestrator |
2025-05-17 00:58:37.306726 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-05-17 00:58:37.306730 | orchestrator |
2025-05-17 00:58:37.306734 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-17 00:58:37.306738 | orchestrator | Saturday 17 May 2025 00:52:59 +0000 (0:00:02.192) 0:07:11.744 **********
2025-05-17 00:58:37.306745 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.306749 | orchestrator |
2025-05-17 00:58:37.306753 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-17 00:58:37.306758 | orchestrator | Saturday 17 May 2025 00:53:00 +0000 (0:00:00.715) 0:07:12.459 **********
2025-05-17 00:58:37.306762 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.306768 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.306774 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.306780 | orchestrator |
2025-05-17 00:58:37.306787 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-17 00:58:37.306794 | orchestrator | Saturday 17 May 2025 00:53:00 +0000 (0:00:00.321) 0:07:12.781 **********
2025-05-17 00:58:37.306806 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.306812 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.306819 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.306825 | orchestrator |
2025-05-17 00:58:37.306830 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-17 00:58:37.306834 | orchestrator | Saturday 17 May 2025 00:53:01 +0000 (0:00:01.030) 0:07:13.448 **********
2025-05-17 00:58:37.306838 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.306842 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.306847 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.306851 | orchestrator |
2025-05-17 00:58:37.306855 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-17 00:58:37.306859 | orchestrator | Saturday 17 May 2025 00:53:02 +0000 (0:00:01.030) 0:07:14.478 **********
2025-05-17 00:58:37.306863 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.306867 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.306871 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.306875 | orchestrator |
2025-05-17 00:58:37.306879 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-17 00:58:37.306883 | orchestrator | Saturday 17 May 2025 00:53:02 +0000 (0:00:00.727) 0:07:15.206 **********
2025-05-17 00:58:37.306887 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.306891 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.306896 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.306900 | orchestrator |
2025-05-17 00:58:37.306904 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-17 00:58:37.306908 | orchestrator | Saturday 17 May 2025 00:53:03 +0000 (0:00:00.340) 0:07:15.547 **********
2025-05-17 00:58:37.306912 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.306916 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.306932 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.306937 | orchestrator |
2025-05-17 00:58:37.306941 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-17 00:58:37.306946 | orchestrator | Saturday 17 May 2025 00:53:03 +0000 (0:00:00.302) 0:07:15.849 **********
2025-05-17 00:58:37.306950 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.306954 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.306958 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.306962 | orchestrator |
2025-05-17 00:58:37.306966 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-17 00:58:37.306970 | orchestrator | Saturday 17 May 2025 00:53:04 +0000 (0:00:00.565) 0:07:16.415 **********
2025-05-17 00:58:37.306974 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.306978 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.306982 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.306986 | orchestrator |
2025-05-17 00:58:37.306990 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-17 00:58:37.306995 | orchestrator | Saturday 17 May 2025 00:53:04 +0000 (0:00:00.341) 0:07:16.756 **********
2025-05-17 00:58:37.306999 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307003 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307007 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307011 | orchestrator |
2025-05-17 00:58:37.307015 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-17 00:58:37.307019 | orchestrator | Saturday 17 May 2025 00:53:04 +0000 (0:00:00.291) 0:07:17.048 **********
2025-05-17 00:58:37.307023 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307027 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307031 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307035 | orchestrator |
2025-05-17 00:58:37.307039 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-17 00:58:37.307043 | orchestrator | Saturday 17 May 2025 00:53:05 +0000 (0:00:00.310) 0:07:17.359 **********
2025-05-17 00:58:37.307048 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.307056 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.307060 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.307064 | orchestrator |
2025-05-17 00:58:37.307068 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-17 00:58:37.307073 | orchestrator | Saturday 17 May 2025 00:53:06 +0000 (0:00:01.027) 0:07:18.386 **********
2025-05-17 00:58:37.307077 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307081 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307085 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307089 | orchestrator |
2025-05-17 00:58:37.307093 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-17 00:58:37.307113 | orchestrator | Saturday 17 May 2025 00:53:06 +0000 (0:00:00.329) 0:07:18.716 **********
2025-05-17 00:58:37.307118 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307122 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307126 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307130 | orchestrator |
2025-05-17 00:58:37.307134 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-17 00:58:37.307139 | orchestrator | Saturday 17 May 2025 00:53:06 +0000 (0:00:00.307) 0:07:19.024 **********
2025-05-17 00:58:37.307143 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.307147 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.307151 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.307155 | orchestrator |
2025-05-17 00:58:37.307159 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-17 00:58:37.307163 | orchestrator | Saturday 17 May 2025 00:53:07 +0000 (0:00:00.357) 0:07:19.381 **********
2025-05-17 00:58:37.307167 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.307171 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.307176 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.307182 | orchestrator |
2025-05-17 00:58:37.307187 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-17 00:58:37.307191 | orchestrator | Saturday 17 May 2025 00:53:07 +0000 (0:00:00.649) 0:07:20.030 **********
2025-05-17 00:58:37.307195 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.307199 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.307203 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.307207 | orchestrator |
2025-05-17 00:58:37.307211 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-17 00:58:37.307215 | orchestrator | Saturday 17 May 2025 00:53:08 +0000 (0:00:00.330) 0:07:20.361 **********
2025-05-17 00:58:37.307219 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307224 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307228 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307232 | orchestrator |
2025-05-17 00:58:37.307236 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-17 00:58:37.307240 | orchestrator | Saturday 17 May 2025 00:53:08 +0000 (0:00:00.363) 0:07:20.724 **********
2025-05-17 00:58:37.307244 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307248 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307252 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307256 | orchestrator |
2025-05-17 00:58:37.307260 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-17 00:58:37.307264 | orchestrator | Saturday 17 May 2025 00:53:08 +0000 (0:00:00.317) 0:07:21.042 **********
2025-05-17 00:58:37.307268 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307272 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307276 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307280 | orchestrator |
2025-05-17 00:58:37.307284 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-17 00:58:37.307288 | orchestrator | Saturday 17 May 2025 00:53:09 +0000 (0:00:00.572) 0:07:21.615 **********
2025-05-17 00:58:37.307293 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.307297 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.307304 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.307308 | orchestrator |
2025-05-17 00:58:37.307312 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-17 00:58:37.307316 | orchestrator | Saturday 17 May 2025 00:53:09 +0000 (0:00:00.399) 0:07:22.014 **********
2025-05-17 00:58:37.307320 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307324 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307328 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307332 | orchestrator |
2025-05-17 00:58:37.307336 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-17 00:58:37.307340 | orchestrator | Saturday 17 May 2025 00:53:10 +0000 (0:00:00.327) 0:07:22.342 **********
2025-05-17 00:58:37.307344 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307349 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307353 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307357 | orchestrator |
2025-05-17 00:58:37.307361 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-17 00:58:37.307365 | orchestrator | Saturday 17 May 2025 00:53:10 +0000 (0:00:00.324) 0:07:22.666 **********
2025-05-17 00:58:37.307369 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307373 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307377 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307381 | orchestrator |
2025-05-17 00:58:37.307385 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-17 00:58:37.307389 | orchestrator | Saturday 17 May 2025 00:53:11 +0000 (0:00:00.651) 0:07:23.317 **********
2025-05-17 00:58:37.307393 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307397 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307401 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307405 | orchestrator |
2025-05-17 00:58:37.307409 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-17 00:58:37.307413 | orchestrator | Saturday 17 May 2025 00:53:11 +0000 (0:00:00.428) 0:07:23.746 **********
2025-05-17 00:58:37.307418 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307422 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307426 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307430 | orchestrator |
2025-05-17 00:58:37.307434 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-17 00:58:37.307438 | orchestrator | Saturday 17 May 2025 00:53:11 +0000 (0:00:00.331) 0:07:24.077 **********
2025-05-17 00:58:37.307442 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307446 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307450 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307454 | orchestrator |
2025-05-17 00:58:37.307458 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-17 00:58:37.307462 | orchestrator | Saturday 17 May 2025 00:53:12 +0000 (0:00:00.304) 0:07:24.382 **********
2025-05-17 00:58:37.307466 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307470 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307474 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307478 | orchestrator |
2025-05-17 00:58:37.307482 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-17 00:58:37.307498 | orchestrator | Saturday 17 May 2025 00:53:12 +0000 (0:00:00.320) 0:07:25.034 **********
2025-05-17 00:58:37.307502 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307507 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307511 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307515 | orchestrator |
2025-05-17 00:58:37.307519 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-17 00:58:37.307523 | orchestrator | Saturday 17 May 2025 00:53:13 +0000 (0:00:00.320) 0:07:25.354 **********
2025-05-17 00:58:37.307527 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307531 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307538 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307542 | orchestrator |
2025-05-17 00:58:37.307546 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-17 00:58:37.307551 | orchestrator | Saturday 17 May 2025 00:53:13 +0000 (0:00:00.334) 0:07:25.688 **********
2025-05-17 00:58:37.307557 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307561 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307566 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307570 | orchestrator |
2025-05-17 00:58:37.307574 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-17 00:58:37.307578 | orchestrator | Saturday 17 May 2025 00:53:13 +0000 (0:00:00.331) 0:07:26.020 **********
2025-05-17 00:58:37.307582 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307586 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307590 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307594 | orchestrator |
2025-05-17 00:58:37.307598 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-17 00:58:37.307602 | orchestrator | Saturday 17 May 2025 00:53:14 +0000 (0:00:00.633) 0:07:26.653 **********
2025-05-17 00:58:37.307607 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307611 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307615 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307619 | orchestrator |
2025-05-17 00:58:37.307623 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-17 00:58:37.307627 | orchestrator | Saturday 17 May 2025 00:53:14 +0000 (0:00:00.346) 0:07:27.000 **********
2025-05-17 00:58:37.307631 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-17 00:58:37.307635 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-17 00:58:37.307639 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307643 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-17 00:58:37.307647 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-17 00:58:37.307651 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307655 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-17 00:58:37.307660 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-17 00:58:37.307664 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307668 | orchestrator |
2025-05-17 00:58:37.307672 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-17 00:58:37.307676 | orchestrator | Saturday 17 May 2025 00:53:15 +0000 (0:00:00.355) 0:07:27.355 **********
2025-05-17 00:58:37.307680 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-17 00:58:37.307684 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-17 00:58:37.307688 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307692 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-17 00:58:37.307696 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-17 00:58:37.307700 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307704 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-17 00:58:37.307708 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-17 00:58:37.307712 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307716 | orchestrator |
2025-05-17 00:58:37.307720 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-17 00:58:37.307724 | orchestrator | Saturday 17 May 2025 00:53:15 +0000 (0:00:00.375) 0:07:27.730 **********
2025-05-17 00:58:37.307728 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307732 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307736 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307740 | orchestrator |
2025-05-17 00:58:37.307744 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-17 00:58:37.307748 | orchestrator | Saturday 17 May 2025 00:53:16 +0000 (0:00:00.610) 0:07:28.341 **********
2025-05-17 00:58:37.307755 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307759 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307763 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307767 | orchestrator |
2025-05-17 00:58:37.307771 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-17 00:58:37.307776 | orchestrator | Saturday 17 May 2025 00:53:16 +0000 (0:00:00.343) 0:07:28.684 **********
2025-05-17 00:58:37.307780 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307784 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307788 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307792 | orchestrator |
2025-05-17 00:58:37.307796 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-17 00:58:37.307800 | orchestrator | Saturday 17 May 2025 00:53:16 +0000 (0:00:00.351) 0:07:29.036 **********
2025-05-17 00:58:37.307804 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307808 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307812 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307816 | orchestrator |
2025-05-17 00:58:37.307820 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-17 00:58:37.307824 | orchestrator | Saturday 17 May 2025 00:53:17 +0000 (0:00:00.307) 0:07:29.344 **********
2025-05-17 00:58:37.307828 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307832 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307836 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307840 | orchestrator |
2025-05-17 00:58:37.307856 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-17 00:58:37.307860 | orchestrator | Saturday 17 May 2025 00:53:17 +0000 (0:00:00.603) 0:07:29.947 **********
2025-05-17 00:58:37.307865 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307869 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.307872 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.307876 | orchestrator |
2025-05-17 00:58:37.307880 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-17 00:58:37.307884 | orchestrator | Saturday 17 May 2025 00:53:18 +0000 (0:00:00.351) 0:07:30.299 **********
2025-05-17 00:58:37.307889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.307893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.307897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.307901 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307905 | orchestrator |
2025-05-17 00:58:37.307911 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-17 00:58:37.307916 | orchestrator | Saturday 17 May 2025 00:53:18 +0000 (0:00:00.418) 0:07:30.718 **********
2025-05-17 00:58:37.307932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.307939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.307946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.307952 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307958 | orchestrator |
2025-05-17 00:58:37.307964 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-17 00:58:37.307970 | orchestrator | Saturday 17 May 2025 00:53:18 +0000 (0:00:00.417) 0:07:31.135 **********
2025-05-17 00:58:37.307975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.307979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.307983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.307987 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.307991 | orchestrator |
2025-05-17 00:58:37.307995 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-17 00:58:37.307999 | orchestrator | Saturday 17 May 2025 00:53:19 +0000 (0:00:00.447) 0:07:31.583 **********
2025-05-17 00:58:37.308008 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.308013 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.308017 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.308021 | orchestrator |
2025-05-17 00:58:37.308025 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-17 00:58:37.308029 | orchestrator | Saturday 17 May 2025 00:53:19 +0000 (0:00:00.328) 0:07:31.912 **********
2025-05-17 00:58:37.308032 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-17 00:58:37.308037 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.308041 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-17 00:58:37.308045 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.308049 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-17 00:58:37.308053 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.308057 | orchestrator |
2025-05-17
00:58:37.308061 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-17 00:58:37.308065 | orchestrator | Saturday 17 May 2025 00:53:20 +0000 (0:00:00.785) 0:07:32.697 ********** 2025-05-17 00:58:37.308069 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308073 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308077 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308081 | orchestrator | 2025-05-17 00:58:37.308085 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 00:58:37.308089 | orchestrator | Saturday 17 May 2025 00:53:20 +0000 (0:00:00.368) 0:07:33.066 ********** 2025-05-17 00:58:37.308093 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308097 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308101 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308105 | orchestrator | 2025-05-17 00:58:37.308109 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-17 00:58:37.308113 | orchestrator | Saturday 17 May 2025 00:53:21 +0000 (0:00:00.319) 0:07:33.385 ********** 2025-05-17 00:58:37.308117 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-17 00:58:37.308121 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308125 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-17 00:58:37.308129 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308133 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-17 00:58:37.308137 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308141 | orchestrator | 2025-05-17 00:58:37.308145 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-17 00:58:37.308149 | orchestrator | Saturday 17 May 2025 00:53:21 +0000 (0:00:00.483) 0:07:33.869 ********** 2025-05-17 
00:58:37.308154 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-17 00:58:37.308158 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308162 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-17 00:58:37.308166 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308170 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-17 00:58:37.308174 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308178 | orchestrator | 2025-05-17 00:58:37.308182 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-17 00:58:37.308186 | orchestrator | Saturday 17 May 2025 00:53:22 +0000 (0:00:00.632) 0:07:34.501 ********** 2025-05-17 00:58:37.308190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.308208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.308213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.308217 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-17 00:58:37.308225 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-17 00:58:37.308229 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-17 00:58:37.308233 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308237 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308241 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-17 00:58:37.308245 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-17 00:58:37.308249 | orchestrator | skipping: 
[testbed-node-5] => (item=testbed-node-5)  2025-05-17 00:58:37.308253 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308257 | orchestrator | 2025-05-17 00:58:37.308261 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-17 00:58:37.308268 | orchestrator | Saturday 17 May 2025 00:53:22 +0000 (0:00:00.658) 0:07:35.160 ********** 2025-05-17 00:58:37.308272 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308277 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308281 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308285 | orchestrator | 2025-05-17 00:58:37.308289 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-17 00:58:37.308293 | orchestrator | Saturday 17 May 2025 00:53:23 +0000 (0:00:00.806) 0:07:35.966 ********** 2025-05-17 00:58:37.308297 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-17 00:58:37.308301 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308305 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-17 00:58:37.308309 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308313 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-17 00:58:37.308317 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308321 | orchestrator | 2025-05-17 00:58:37.308325 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-17 00:58:37.308329 | orchestrator | Saturday 17 May 2025 00:53:24 +0000 (0:00:00.540) 0:07:36.507 ********** 2025-05-17 00:58:37.308334 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308338 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308342 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308346 | orchestrator | 2025-05-17 00:58:37.308350 | orchestrator | TASK [ceph-handler : set_fact 
multisite_called_from_handler_role] ************** 2025-05-17 00:58:37.308354 | orchestrator | Saturday 17 May 2025 00:53:24 +0000 (0:00:00.614) 0:07:37.122 ********** 2025-05-17 00:58:37.308358 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308362 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308366 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308370 | orchestrator | 2025-05-17 00:58:37.308374 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-17 00:58:37.308378 | orchestrator | Saturday 17 May 2025 00:53:25 +0000 (0:00:00.478) 0:07:37.600 ********** 2025-05-17 00:58:37.308382 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.308387 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.308391 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.308395 | orchestrator | 2025-05-17 00:58:37.308399 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-17 00:58:37.308403 | orchestrator | Saturday 17 May 2025 00:53:25 +0000 (0:00:00.268) 0:07:37.869 ********** 2025-05-17 00:58:37.308407 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-17 00:58:37.308411 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 00:58:37.308415 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 00:58:37.308419 | orchestrator | 2025-05-17 00:58:37.308423 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-17 00:58:37.308427 | orchestrator | Saturday 17 May 2025 00:53:26 +0000 (0:00:00.920) 0:07:38.789 ********** 2025-05-17 00:58:37.308431 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.308439 | orchestrator | 
2025-05-17 00:58:37.308443 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-17 00:58:37.308447 | orchestrator | Saturday 17 May 2025 00:53:26 +0000 (0:00:00.458) 0:07:39.248 ********** 2025-05-17 00:58:37.308451 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308455 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308459 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308463 | orchestrator | 2025-05-17 00:58:37.308467 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-17 00:58:37.308471 | orchestrator | Saturday 17 May 2025 00:53:27 +0000 (0:00:00.256) 0:07:39.505 ********** 2025-05-17 00:58:37.308475 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308479 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308484 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308488 | orchestrator | 2025-05-17 00:58:37.308492 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-17 00:58:37.308496 | orchestrator | Saturday 17 May 2025 00:53:27 +0000 (0:00:00.420) 0:07:39.926 ********** 2025-05-17 00:58:37.308500 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308504 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308508 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308512 | orchestrator | 2025-05-17 00:58:37.308516 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-17 00:58:37.308520 | orchestrator | Saturday 17 May 2025 00:53:27 +0000 (0:00:00.281) 0:07:40.208 ********** 2025-05-17 00:58:37.308524 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.308528 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.308532 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.308536 | orchestrator | 
2025-05-17 00:58:37.308540 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-17 00:58:37.308544 | orchestrator | Saturday 17 May 2025 00:53:28 +0000 (0:00:00.282) 0:07:40.490 ********** 2025-05-17 00:58:37.308560 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.308564 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.308569 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.308573 | orchestrator | 2025-05-17 00:58:37.308577 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-17 00:58:37.308581 | orchestrator | Saturday 17 May 2025 00:53:28 +0000 (0:00:00.557) 0:07:41.047 ********** 2025-05-17 00:58:37.308585 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.308589 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.308593 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.308597 | orchestrator | 2025-05-17 00:58:37.308601 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-17 00:58:37.308605 | orchestrator | Saturday 17 May 2025 00:53:29 +0000 (0:00:00.595) 0:07:41.643 ********** 2025-05-17 00:58:37.308609 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-17 00:58:37.308616 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-17 00:58:37.308620 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-17 00:58:37.308625 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-17 00:58:37.308629 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-17 00:58:37.308633 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : install dependencies] *****************************************
Saturday 17 May 2025 00:53:32 +0000 (0:00:03.052) 0:07:44.695 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : include_tasks common.yml] *************************************
Saturday 17 May 2025 00:53:32 +0000 (0:00:00.337) 0:07:45.033 **********
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : create bootstrap-osd and osd directories] *********************
Saturday 17 May 2025 00:53:33 +0000 (0:00:00.790) 0:07:45.824 **********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : get keys from monitors] ***************************************
Saturday 17 May 2025 00:53:34 +0000 (0:00:00.999) 0:07:46.823 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : copy ceph key(s) if needed] ***********************************
Saturday 17 May 2025 00:53:36 +0000 (0:00:01.773) 0:07:48.597 **********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-4]
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : set noup flag] ************************************************
Saturday 17 May 2025 00:53:37 +0000 (0:00:01.280) 0:07:49.878 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : include container_options_facts.yml] **************************
Saturday 17 May 2025 00:53:40 +0000 (0:00:02.568) 0:07:52.446 **********
included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] ***
Saturday 17 May 2025 00:53:40 +0000 (0:00:00.568) 0:07:53.015 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] ***
Saturday 17 May 2025 00:53:41 +0000 (0:00:00.554) 0:07:53.569 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] ***
Saturday 17 May 2025 00:53:41 +0000 (0:00:00.309) 0:07:53.879 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] ***
Saturday 17 May 2025 00:53:41 +0000 (0:00:00.311) 0:07:54.191 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : include_tasks scenarios/lvm.yml] ******************************
Saturday 17 May 2025 00:53:42 +0000 (0:00:00.296) 0:07:54.487 **********
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : use ceph-volume to create bluestore osds] *********************
Saturday 17 May 2025 00:53:43 +0000 (0:00:00.818) 0:07:55.306 **********
changed: [testbed-node-5] => (item={'data': 'osd-block-a83a275b-240b-53eb-892d-9c3e23ab252d', 'data_vg': 'ceph-a83a275b-240b-53eb-892d-9c3e23ab252d'})
changed: [testbed-node-4] => (item={'data': 'osd-block-93bb0954-6685-5c67-a7e0-a3574f092206', 'data_vg': 'ceph-93bb0954-6685-5c67-a7e0-a3574f092206'})
changed: [testbed-node-3] => (item={'data': 'osd-block-7dd92559-5dfb-56e9-86ff-64c31a268c5e', 'data_vg': 'ceph-7dd92559-5dfb-56e9-86ff-64c31a268c5e'})
changed: [testbed-node-5] => (item={'data': 'osd-block-b4d5f2e3-0e32-57e8-8b55-58d04db15593', 'data_vg': 'ceph-b4d5f2e3-0e32-57e8-8b55-58d04db15593'})
changed: [testbed-node-4] => (item={'data': 'osd-block-e21dde7b-e402-5316-8511-fd8df0cc7e38', 'data_vg': 'ceph-e21dde7b-e402-5316-8511-fd8df0cc7e38'})
changed: [testbed-node-3] => (item={'data': 'osd-block-25c991a6-e724-5c1a-b659-154410c60242', 'data_vg': 'ceph-25c991a6-e724-5c1a-b659-154410c60242'})

TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************
Saturday 17 May 2025 00:54:23 +0000 (0:00:40.146) 0:08:35.453 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : include_tasks start_osds.yml] *********************************
Saturday 17 May 2025 00:54:23 +0000 (0:00:00.489) 0:08:35.942 **********
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : get osd ids] **************************************************
Saturday 17 May 2025 00:54:24 +0000 (0:00:00.539) 0:08:36.481 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : collect osd ids] **********************************************
Saturday 17 May 2025 00:54:24 +0000 (0:00:00.647) 0:08:37.128 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : include_tasks systemd.yml] ************************************
Saturday 17 May 2025 00:54:26 +0000 (0:00:01.967) 0:08:39.096 **********
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : generate systemd unit file] ***********************************
Saturday 17 May 2025 00:54:27 +0000 (0:00:00.570) 0:08:39.667 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : generate systemd ceph-osd target file] ************************
Saturday 17 May 2025 00:54:28 +0000 (0:00:01.412) 0:08:41.080 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : enable ceph-osd.target] ***************************************
Saturday 17 May 2025 00:54:29 +0000 (0:00:01.185) 0:08:42.266 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : ensure systemd service override directory exists] *************
Saturday 17 May 2025 00:54:31 +0000 (0:00:01.705) 0:08:43.971 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : add ceph-osd systemd service overrides] ***********************
Saturday 17 May 2025 00:54:32 +0000 (0:00:00.326) 0:08:44.298 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] ***
Saturday 17 May 2025 00:54:32 +0000 (0:00:00.535) 0:08:44.833 **********
ok: [testbed-node-3] => (item=1)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=2)
ok: [testbed-node-3] => (item=5)
ok: [testbed-node-4] => (item=3)
ok: [testbed-node-5] => (item=4)

TASK [ceph-osd : systemd start osd] ********************************************
Saturday 17 May 2025 00:54:33 +0000 (0:00:01.068) 0:08:45.902 **********
changed: [testbed-node-4] => (item=0)
changed: [testbed-node-3] => (item=1)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=5)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=4)

TASK [ceph-osd : unset noup flag] **********************************************
Saturday 17 May 2025 00:54:37 +0000 (0:00:03.503) 0:08:49.406 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : wait for all osd to be up] ************************************
Saturday 17 May 2025 00:54:40 +0000 (0:00:03.049) 0:08:52.456 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left).
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : include crush_rules.yml] **************************************
Saturday 17 May 2025 00:54:52 +0000 (0:00:12.444) 0:09:04.900 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : include openstack_config.yml] *********************************
Saturday 17 May 2025 00:54:53 +0000 (0:00:00.474) 0:09:05.374 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
Saturday 17 May 2025 00:54:54 +0000 (0:00:01.077) 0:09:06.452 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : osds handler] **********************************
Saturday 17 May 2025 00:54:55 +0000 (0:00:00.867) 0:09:07.319 **********
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : set_fact trigger_restart] **********************
Saturday 17 May 2025 00:54:55 +0000 (0:00:00.520) 0:09:07.839 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ********
Saturday 17 May 2025 00:54:55 +0000 (0:00:00.428) 0:09:08.267 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : unset noup flag] *******************************
Saturday 17 May 2025 00:54:56 +0000 (0:00:00.305) 0:09:08.573 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : copy osd restart script] ***********************
Saturday 17 May 2025 00:54:56 +0000 (0:00:00.252) 0:09:08.826 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : get pool list] *********************************
Saturday 17 May 2025 00:54:57 +0000 (0:00:00.541) 0:09:09.368 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : get balancer module status] ********************
Saturday 17 May 2025 00:54:57 +0000 (0:00:00.254) 0:09:09.622 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
Saturday 17 May 2025 00:54:57 +0000 (0:00:00.244) 0:09:09.867 **********
skipping: [testbed-node-3]
orchestrator | 2025-05-17 00:58:37.309659 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-17 00:58:37.309663 | orchestrator | Saturday 17 May 2025 00:54:57 +0000 (0:00:00.119) 0:09:09.987 ********** 2025-05-17 00:58:37.309666 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.309670 | orchestrator | 2025-05-17 00:58:37.309674 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-17 00:58:37.309677 | orchestrator | Saturday 17 May 2025 00:54:57 +0000 (0:00:00.219) 0:09:10.206 ********** 2025-05-17 00:58:37.309681 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.309685 | orchestrator | 2025-05-17 00:58:37.309688 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-17 00:58:37.309695 | orchestrator | Saturday 17 May 2025 00:54:58 +0000 (0:00:00.261) 0:09:10.467 ********** 2025-05-17 00:58:37.309699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.309703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.309706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.309710 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.309714 | orchestrator | 2025-05-17 00:58:37.309718 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-17 00:58:37.309721 | orchestrator | Saturday 17 May 2025 00:54:58 +0000 (0:00:00.415) 0:09:10.883 ********** 2025-05-17 00:58:37.309725 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.309729 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.309732 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.309736 | orchestrator | 2025-05-17 00:58:37.309740 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] 
*************** 2025-05-17 00:58:37.309743 | orchestrator | Saturday 17 May 2025 00:54:59 +0000 (0:00:00.769) 0:09:11.653 ********** 2025-05-17 00:58:37.309747 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.309751 | orchestrator | 2025-05-17 00:58:37.309755 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-17 00:58:37.309758 | orchestrator | Saturday 17 May 2025 00:54:59 +0000 (0:00:00.346) 0:09:12.000 ********** 2025-05-17 00:58:37.309762 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.309766 | orchestrator | 2025-05-17 00:58:37.309769 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-17 00:58:37.309773 | orchestrator | Saturday 17 May 2025 00:54:59 +0000 (0:00:00.279) 0:09:12.279 ********** 2025-05-17 00:58:37.309777 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.309780 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.309784 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.309788 | orchestrator | 2025-05-17 00:58:37.309791 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-17 00:58:37.309795 | orchestrator | 2025-05-17 00:58:37.309799 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-17 00:58:37.309802 | orchestrator | Saturday 17 May 2025 00:55:02 +0000 (0:00:02.943) 0:09:15.223 ********** 2025-05-17 00:58:37.309816 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.309821 | orchestrator | 2025-05-17 00:58:37.309825 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-17 00:58:37.309829 | orchestrator | Saturday 17 May 2025 00:55:03 +0000 (0:00:01.036) 
0:09:16.259 ********** 2025-05-17 00:58:37.309832 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.309836 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.309840 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.309844 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.309848 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.309851 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.309855 | orchestrator | 2025-05-17 00:58:37.309859 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-17 00:58:37.309865 | orchestrator | Saturday 17 May 2025 00:55:04 +0000 (0:00:00.651) 0:09:16.911 ********** 2025-05-17 00:58:37.309869 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.309873 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.309876 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.309880 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.309884 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.309887 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.309891 | orchestrator | 2025-05-17 00:58:37.309895 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-17 00:58:37.309899 | orchestrator | Saturday 17 May 2025 00:55:05 +0000 (0:00:01.074) 0:09:17.985 ********** 2025-05-17 00:58:37.309907 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.309911 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.309914 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.309918 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.309936 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.309940 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.309944 | orchestrator | 2025-05-17 00:58:37.309948 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-17 
00:58:37.309951 | orchestrator | Saturday 17 May 2025 00:55:06 +0000 (0:00:01.198) 0:09:19.184 ********** 2025-05-17 00:58:37.309955 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.309958 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.309962 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.309966 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.309969 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.309973 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.309977 | orchestrator | 2025-05-17 00:58:37.309980 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-17 00:58:37.309984 | orchestrator | Saturday 17 May 2025 00:55:07 +0000 (0:00:00.991) 0:09:20.175 ********** 2025-05-17 00:58:37.309988 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.309991 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.309995 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.309999 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310003 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.310006 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.310010 | orchestrator | 2025-05-17 00:58:37.310029 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-17 00:58:37.310034 | orchestrator | Saturday 17 May 2025 00:55:08 +0000 (0:00:00.889) 0:09:21.065 ********** 2025-05-17 00:58:37.310038 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310041 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310045 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310049 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310052 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310056 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310060 | orchestrator | 2025-05-17 00:58:37.310063 | 
orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-17 00:58:37.310067 | orchestrator | Saturday 17 May 2025 00:55:09 +0000 (0:00:00.625) 0:09:21.690 ********** 2025-05-17 00:58:37.310071 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310076 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310081 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310087 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310090 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310094 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310098 | orchestrator | 2025-05-17 00:58:37.310101 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-17 00:58:37.310105 | orchestrator | Saturday 17 May 2025 00:55:10 +0000 (0:00:00.877) 0:09:22.567 ********** 2025-05-17 00:58:37.310109 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310113 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310116 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310120 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310124 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310127 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310131 | orchestrator | 2025-05-17 00:58:37.310135 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-17 00:58:37.310138 | orchestrator | Saturday 17 May 2025 00:55:10 +0000 (0:00:00.631) 0:09:23.199 ********** 2025-05-17 00:58:37.310142 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310146 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310149 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310158 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310161 | orchestrator | skipping: [testbed-node-4] 
2025-05-17 00:58:37.310165 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310169 | orchestrator | 2025-05-17 00:58:37.310172 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-17 00:58:37.310176 | orchestrator | Saturday 17 May 2025 00:55:11 +0000 (0:00:00.893) 0:09:24.093 ********** 2025-05-17 00:58:37.310180 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310183 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310187 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310191 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310194 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310206 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310210 | orchestrator | 2025-05-17 00:58:37.310214 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-17 00:58:37.310231 | orchestrator | Saturday 17 May 2025 00:55:12 +0000 (0:00:00.649) 0:09:24.743 ********** 2025-05-17 00:58:37.310236 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.310239 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.310243 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.310247 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.310257 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.310261 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.310265 | orchestrator | 2025-05-17 00:58:37.310268 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-17 00:58:37.310272 | orchestrator | Saturday 17 May 2025 00:55:13 +0000 (0:00:01.268) 0:09:26.012 ********** 2025-05-17 00:58:37.310276 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310280 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310283 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310287 | orchestrator 
| skipping: [testbed-node-3] 2025-05-17 00:58:37.310291 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310294 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310298 | orchestrator | 2025-05-17 00:58:37.310318 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-17 00:58:37.310322 | orchestrator | Saturday 17 May 2025 00:55:14 +0000 (0:00:00.604) 0:09:26.616 ********** 2025-05-17 00:58:37.310326 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.310330 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.310333 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.310337 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310341 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310345 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310348 | orchestrator | 2025-05-17 00:58:37.310352 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-17 00:58:37.310356 | orchestrator | Saturday 17 May 2025 00:55:15 +0000 (0:00:00.814) 0:09:27.431 ********** 2025-05-17 00:58:37.310360 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310363 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310367 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310371 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.310375 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.310378 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.310382 | orchestrator | 2025-05-17 00:58:37.310386 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-17 00:58:37.310389 | orchestrator | Saturday 17 May 2025 00:55:15 +0000 (0:00:00.655) 0:09:28.086 ********** 2025-05-17 00:58:37.310393 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310397 | orchestrator | skipping: [testbed-node-1] 2025-05-17 
00:58:37.310400 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310404 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.310408 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.310411 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.310415 | orchestrator | 2025-05-17 00:58:37.310419 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-17 00:58:37.310426 | orchestrator | Saturday 17 May 2025 00:55:16 +0000 (0:00:00.876) 0:09:28.963 ********** 2025-05-17 00:58:37.310430 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310434 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310437 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310441 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.310445 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.310448 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.310452 | orchestrator | 2025-05-17 00:58:37.310456 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-17 00:58:37.310460 | orchestrator | Saturday 17 May 2025 00:55:17 +0000 (0:00:00.649) 0:09:29.613 ********** 2025-05-17 00:58:37.310463 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310467 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310471 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310474 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310480 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310487 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310491 | orchestrator | 2025-05-17 00:58:37.310494 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-17 00:58:37.310498 | orchestrator | Saturday 17 May 2025 00:55:18 +0000 (0:00:00.850) 0:09:30.464 ********** 2025-05-17 00:58:37.310502 | orchestrator | 
skipping: [testbed-node-0] 2025-05-17 00:58:37.310506 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310509 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310513 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310516 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310520 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310524 | orchestrator | 2025-05-17 00:58:37.310528 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-17 00:58:37.310531 | orchestrator | Saturday 17 May 2025 00:55:18 +0000 (0:00:00.623) 0:09:31.087 ********** 2025-05-17 00:58:37.310535 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.310539 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.310543 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.310546 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310550 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310554 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310557 | orchestrator | 2025-05-17 00:58:37.310561 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-17 00:58:37.310565 | orchestrator | Saturday 17 May 2025 00:55:19 +0000 (0:00:00.827) 0:09:31.914 ********** 2025-05-17 00:58:37.310568 | orchestrator | ok: [testbed-node-0] 2025-05-17 00:58:37.310572 | orchestrator | ok: [testbed-node-1] 2025-05-17 00:58:37.310576 | orchestrator | ok: [testbed-node-2] 2025-05-17 00:58:37.310580 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.310583 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.310587 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.310591 | orchestrator | 2025-05-17 00:58:37.310594 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-17 00:58:37.310598 | orchestrator | Saturday 17 May 2025 00:55:20 
+0000 (0:00:00.644) 0:09:32.558 ********** 2025-05-17 00:58:37.310602 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310605 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310609 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310613 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310616 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310620 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310624 | orchestrator | 2025-05-17 00:58:37.310639 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-17 00:58:37.310644 | orchestrator | Saturday 17 May 2025 00:55:21 +0000 (0:00:00.952) 0:09:33.510 ********** 2025-05-17 00:58:37.310647 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310655 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310658 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310662 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310666 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310669 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310673 | orchestrator | 2025-05-17 00:58:37.310677 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-17 00:58:37.310681 | orchestrator | Saturday 17 May 2025 00:55:21 +0000 (0:00:00.656) 0:09:34.167 ********** 2025-05-17 00:58:37.310684 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310688 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310692 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310696 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310699 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310706 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310709 | orchestrator | 2025-05-17 00:58:37.310713 | orchestrator | TASK [ceph-config 
: count number of osds for lvm scenario] ********************* 2025-05-17 00:58:37.310717 | orchestrator | Saturday 17 May 2025 00:55:22 +0000 (0:00:00.861) 0:09:35.029 ********** 2025-05-17 00:58:37.310721 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310724 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310728 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310732 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310736 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310739 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310743 | orchestrator | 2025-05-17 00:58:37.310747 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-17 00:58:37.310751 | orchestrator | Saturday 17 May 2025 00:55:23 +0000 (0:00:00.707) 0:09:35.736 ********** 2025-05-17 00:58:37.310754 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310758 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310762 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310765 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310769 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310773 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310776 | orchestrator | 2025-05-17 00:58:37.310780 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-17 00:58:37.310784 | orchestrator | Saturday 17 May 2025 00:55:24 +0000 (0:00:00.925) 0:09:36.662 ********** 2025-05-17 00:58:37.310787 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310791 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310795 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310798 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310802 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310806 | 
orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310810 | orchestrator | 2025-05-17 00:58:37.310813 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-17 00:58:37.310817 | orchestrator | Saturday 17 May 2025 00:55:25 +0000 (0:00:00.645) 0:09:37.307 ********** 2025-05-17 00:58:37.310821 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310825 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310828 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310832 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310836 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310839 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310843 | orchestrator | 2025-05-17 00:58:37.310847 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-17 00:58:37.310851 | orchestrator | Saturday 17 May 2025 00:55:25 +0000 (0:00:00.721) 0:09:38.029 ********** 2025-05-17 00:58:37.310854 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310858 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310862 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310865 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310874 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310878 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310881 | orchestrator | 2025-05-17 00:58:37.310885 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-17 00:58:37.310889 | orchestrator | Saturday 17 May 2025 00:55:26 +0000 (0:00:00.509) 0:09:38.539 ********** 2025-05-17 00:58:37.310892 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310896 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310900 | orchestrator | skipping: 
[testbed-node-2] 2025-05-17 00:58:37.310903 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310907 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310911 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310915 | orchestrator | 2025-05-17 00:58:37.310918 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-17 00:58:37.310932 | orchestrator | Saturday 17 May 2025 00:55:26 +0000 (0:00:00.580) 0:09:39.119 ********** 2025-05-17 00:58:37.310936 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310940 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310944 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310947 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310951 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310955 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310958 | orchestrator | 2025-05-17 00:58:37.310962 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-17 00:58:37.310966 | orchestrator | Saturday 17 May 2025 00:55:27 +0000 (0:00:00.485) 0:09:39.604 ********** 2025-05-17 00:58:37.310970 | orchestrator | skipping: [testbed-node-0] 2025-05-17 00:58:37.310973 | orchestrator | skipping: [testbed-node-1] 2025-05-17 00:58:37.310977 | orchestrator | skipping: [testbed-node-2] 2025-05-17 00:58:37.310981 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.310985 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.310988 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.310992 | orchestrator | 2025-05-17 00:58:37.310996 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-17 00:58:37.310999 | orchestrator | Saturday 17 May 2025 00:55:27 +0000 (0:00:00.673) 0:09:40.278 ********** 2025-05-17 
00:58:37.311015 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311020 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311023 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311027 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311031 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311034 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311038 | orchestrator |
2025-05-17 00:58:37.311042 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-17 00:58:37.311046 | orchestrator | Saturday 17 May 2025 00:55:28 +0000 (0:00:00.543) 0:09:40.821 **********
2025-05-17 00:58:37.311049 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-17 00:58:37.311053 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-17 00:58:37.311057 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311060 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-17 00:58:37.311064 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-17 00:58:37.311068 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311074 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-17 00:58:37.311078 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-17 00:58:37.311082 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311086 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-17 00:58:37.311089 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-17 00:58:37.311093 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-17 00:58:37.311097 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-17 00:58:37.311104 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311107 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311111 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-17 00:58:37.311115 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-17 00:58:37.311118 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311122 | orchestrator |
2025-05-17 00:58:37.311126 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-17 00:58:37.311130 | orchestrator | Saturday 17 May 2025 00:55:29 +0000 (0:00:00.731) 0:09:41.553 **********
2025-05-17 00:58:37.311133 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-17 00:58:37.311137 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-17 00:58:37.311141 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311144 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-17 00:58:37.311148 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-17 00:58:37.311152 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311155 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-17 00:58:37.311159 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-17 00:58:37.311163 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311167 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-17 00:58:37.311170 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-17 00:58:37.311174 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311178 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-17 00:58:37.311182 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-17 00:58:37.311185 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311189 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-17 00:58:37.311193 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-17 00:58:37.311196 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311200 | orchestrator |
2025-05-17 00:58:37.311204 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-17 00:58:37.311207 | orchestrator | Saturday 17 May 2025 00:55:29 +0000 (0:00:00.675) 0:09:42.229 **********
2025-05-17 00:58:37.311211 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311215 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311219 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311222 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311226 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311230 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311233 | orchestrator |
2025-05-17 00:58:37.311237 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-17 00:58:37.311241 | orchestrator | Saturday 17 May 2025 00:55:30 +0000 (0:00:00.769) 0:09:42.998 **********
2025-05-17 00:58:37.311244 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311248 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311252 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311256 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311259 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311263 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311267 | orchestrator |
2025-05-17 00:58:37.311271 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-17 00:58:37.311275 | orchestrator | Saturday 17 May 2025 00:55:31 +0000 (0:00:00.684) 0:09:43.683 **********
2025-05-17 00:58:37.311278 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311282 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311285 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311289 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311293 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311300 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311303 | orchestrator |
2025-05-17 00:58:37.311307 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-17 00:58:37.311311 | orchestrator | Saturday 17 May 2025 00:55:32 +0000 (0:00:00.814) 0:09:44.498 **********
2025-05-17 00:58:37.311315 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311318 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311322 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311326 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311330 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311333 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311337 | orchestrator |
2025-05-17 00:58:37.311351 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-17 00:58:37.311355 | orchestrator | Saturday 17 May 2025 00:55:32 +0000 (0:00:00.640) 0:09:45.139 **********
2025-05-17 00:58:37.311359 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311363 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311366 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311370 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311374 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311378 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311381 | orchestrator |
2025-05-17 00:58:37.311385 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-17 00:58:37.311389 | orchestrator | Saturday 17 May 2025 00:55:33 +0000 (0:00:00.879) 0:09:46.019 **********
2025-05-17 00:58:37.311392 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311396 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311400 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311403 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311407 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311413 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311417 | orchestrator |
2025-05-17 00:58:37.311421 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-17 00:58:37.311425 | orchestrator | Saturday 17 May 2025 00:55:34 +0000 (0:00:00.679) 0:09:46.698 **********
2025-05-17 00:58:37.311428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:58:37.311432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:58:37.311436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:58:37.311439 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311443 | orchestrator |
2025-05-17 00:58:37.311447 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-17 00:58:37.311451 | orchestrator | Saturday 17 May 2025 00:55:34 +0000 (0:00:00.453) 0:09:47.152 **********
2025-05-17 00:58:37.311454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:58:37.311458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:58:37.311462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:58:37.311465 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311469 | orchestrator |
2025-05-17 00:58:37.311473 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-17 00:58:37.311476 | orchestrator | Saturday 17 May 2025 00:55:35 +0000 (0:00:00.425) 0:09:47.577 **********
2025-05-17 00:58:37.311480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:58:37.311484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:58:37.311488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:58:37.311491 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311495 | orchestrator |
2025-05-17 00:58:37.311499 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-17 00:58:37.311502 | orchestrator | Saturday 17 May 2025 00:55:35 +0000 (0:00:00.657) 0:09:48.234 **********
2025-05-17 00:58:37.311506 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311513 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311517 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311520 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311524 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311528 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311531 | orchestrator |
2025-05-17 00:58:37.311535 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-17 00:58:37.311539 | orchestrator | Saturday 17 May 2025 00:55:36 +0000 (0:00:00.925) 0:09:49.160 **********
2025-05-17 00:58:37.311543 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-17 00:58:37.311546 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311550 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-17 00:58:37.311554 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-17 00:58:37.311557 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311561 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-17 00:58:37.311565 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311569 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311572 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-17 00:58:37.311576 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311580 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-17 00:58:37.311583 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311587 | orchestrator |
2025-05-17 00:58:37.311591 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-17 00:58:37.311594 | orchestrator | Saturday 17 May 2025 00:55:37 +0000 (0:00:01.056) 0:09:50.216 **********
2025-05-17 00:58:37.311598 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311602 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311605 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311609 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311613 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311616 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311620 | orchestrator |
2025-05-17 00:58:37.311624 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-17 00:58:37.311628 | orchestrator | Saturday 17 May 2025 00:55:39 +0000 (0:00:01.187) 0:09:51.404 **********
2025-05-17 00:58:37.311631 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311635 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311639 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311642 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311646 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311650 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311653 | orchestrator |
2025-05-17 00:58:37.311657 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-17 00:58:37.311661 | orchestrator | Saturday 17 May 2025 00:55:39 +0000 (0:00:00.718) 0:09:52.122 **********
2025-05-17 00:58:37.311665 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-17 00:58:37.311668 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-17 00:58:37.311672 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311686 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311690 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-17 00:58:37.311694 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311697 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-17 00:58:37.311701 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311705 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-17 00:58:37.311709 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311712 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-17 00:58:37.311716 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311720 | orchestrator |
2025-05-17 00:58:37.311723 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-17 00:58:37.311730 | orchestrator | Saturday 17 May 2025 00:55:40 +0000 (0:00:01.035) 0:09:53.158 **********
2025-05-17 00:58:37.311734 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311738 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311741 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311747 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-17 00:58:37.311751 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311755 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-17 00:58:37.311759 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311763 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-17 00:58:37.311766 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311770 | orchestrator |
2025-05-17 00:58:37.311774 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-17 00:58:37.311777 | orchestrator | Saturday 17 May 2025 00:55:41 +0000 (0:00:00.616) 0:09:53.774 **********
2025-05-17 00:58:37.311781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-17 00:58:37.311785 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-17 00:58:37.311788 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-17 00:58:37.311792 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311796 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-17 00:58:37.311799 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-17 00:58:37.311803 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-17 00:58:37.311807 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311810 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-17 00:58:37.311814 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-17 00:58:37.311818 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-17 00:58:37.311821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.311825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.311829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.311833 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311836 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-17 00:58:37.311840 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-17 00:58:37.311844 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311847 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-17 00:58:37.311851 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311855 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-17 00:58:37.311858 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-17 00:58:37.311862 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-17 00:58:37.311865 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311869 | orchestrator |
2025-05-17 00:58:37.311873 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-17 00:58:37.311877 | orchestrator | Saturday 17 May 2025 00:55:42 +0000 (0:00:01.207) 0:09:54.982 **********
2025-05-17 00:58:37.311880 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311884 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311888 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311891 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311895 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311899 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311903 | orchestrator |
2025-05-17 00:58:37.311906 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-17 00:58:37.311913 | orchestrator | Saturday 17 May 2025 00:55:43 +0000 (0:00:00.938) 0:09:55.920 **********
2025-05-17 00:58:37.311917 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.311952 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.311960 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.311966 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-17 00:58:37.311971 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.311977 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-17 00:58:37.311984 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.311989 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-17 00:58:37.311993 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.311996 | orchestrator |
2025-05-17 00:58:37.312000 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-17 00:58:37.312004 | orchestrator | Saturday 17 May 2025 00:55:44 +0000 (0:00:01.212) 0:09:57.133 **********
2025-05-17 00:58:37.312008 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.312011 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.312015 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.312019 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312022 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312026 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312030 | orchestrator |
2025-05-17 00:58:37.312047 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-17 00:58:37.312052 | orchestrator | Saturday 17 May 2025 00:55:45 +0000 (0:00:01.148) 0:09:58.281 **********
2025-05-17 00:58:37.312056 | orchestrator | skipping: [testbed-node-0]
2025-05-17 00:58:37.312060 | orchestrator | skipping: [testbed-node-1]
2025-05-17 00:58:37.312063 | orchestrator | skipping: [testbed-node-2]
2025-05-17 00:58:37.312067 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312071 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312074 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312078 | orchestrator |
2025-05-17 00:58:37.312082 | orchestrator | TASK [ceph-crash : create client.crash keyring] ********************************
2025-05-17 00:58:37.312086 | orchestrator | Saturday 17 May 2025 00:55:47 +0000 (0:00:01.139) 0:09:59.420 **********
2025-05-17 00:58:37.312090 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.312093 | orchestrator |
2025-05-17 00:58:37.312097 | orchestrator | TASK [ceph-crash : get keys from monitors] *************************************
2025-05-17 00:58:37.312105 | orchestrator | Saturday 17 May 2025 00:55:50 +0000 (0:00:03.345) 0:10:02.766 **********
2025-05-17 00:58:37.312109 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.312113 | orchestrator |
2025-05-17 00:58:37.312117 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] *********************************
2025-05-17 00:58:37.312121 | orchestrator | Saturday 17 May 2025 00:55:52 +0000 (0:00:01.751) 0:10:04.517 **********
2025-05-17 00:58:37.312124 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.312128 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.312132 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.312136 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.312139 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.312143 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.312147 | orchestrator |
2025-05-17 00:58:37.312151 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] **************************
2025-05-17 00:58:37.312154 | orchestrator | Saturday 17 May 2025 00:55:53 +0000 (0:00:01.517) 0:10:06.035 **********
2025-05-17 00:58:37.312158 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.312162 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.312166 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.312169 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.312173 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.312177 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.312180 | orchestrator |
2025-05-17 00:58:37.312184 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] **********************************
2025-05-17 00:58:37.312192 | orchestrator | Saturday 17 May 2025 00:55:54 +0000 (0:00:01.230) 0:10:07.265 **********
2025-05-17 00:58:37.312196 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.312201 | orchestrator |
2025-05-17 00:58:37.312204 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
2025-05-17 00:58:37.312208 | orchestrator | Saturday 17 May 2025 00:55:56 +0000 (0:00:01.247) 0:10:08.512 **********
2025-05-17 00:58:37.312212 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.312216 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.312219 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.312223 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.312227 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.312231 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.312243 | orchestrator |
2025-05-17 00:58:37.312247 | orchestrator | TASK [ceph-crash : start the ceph-crash service] *******************************
2025-05-17 00:58:37.312251 | orchestrator | Saturday 17 May 2025 00:55:57 +0000 (0:00:01.741) 0:10:10.254 **********
2025-05-17 00:58:37.312254 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.312258 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.312268 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.312272 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.312275 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.312279 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.312283 | orchestrator |
2025-05-17 00:58:37.312286 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] ****************************
2025-05-17 00:58:37.312290 | orchestrator | Saturday 17 May 2025 00:56:01 +0000 (0:00:03.905) 0:10:14.159 **********
2025-05-17 00:58:37.312294 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.312298 | orchestrator |
2025-05-17 00:58:37.312302 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ******
2025-05-17 00:58:37.312305 | orchestrator | Saturday 17 May 2025 00:56:03 +0000 (0:00:01.508) 0:10:15.668 **********
2025-05-17 00:58:37.312309 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.312313 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.312317 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.312320 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312324 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312328 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312332 | orchestrator |
2025-05-17 00:58:37.312335 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] ****************
2025-05-17 00:58:37.312339 | orchestrator | Saturday 17 May 2025 00:56:04 +0000 (0:00:00.798) 0:10:16.467 **********
2025-05-17 00:58:37.312343 | orchestrator | changed: [testbed-node-0]
2025-05-17 00:58:37.312347 | orchestrator | changed: [testbed-node-1]
2025-05-17 00:58:37.312350 | orchestrator | changed: [testbed-node-2]
2025-05-17 00:58:37.312354 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.312358 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.312361 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.312365 | orchestrator |
2025-05-17 00:58:37.312369 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] *******
2025-05-17 00:58:37.312373 | orchestrator | Saturday 17 May 2025 00:56:06 +0000 (0:00:02.468) 0:10:18.935 **********
2025-05-17 00:58:37.312376 | orchestrator | ok: [testbed-node-0]
2025-05-17 00:58:37.312380 | orchestrator | ok: [testbed-node-1]
2025-05-17 00:58:37.312384 | orchestrator | ok: [testbed-node-2]
2025-05-17 00:58:37.312388 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312391 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312395 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312399 | orchestrator |
2025-05-17 00:58:37.312405 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-05-17 00:58:37.312412 | orchestrator |
2025-05-17 00:58:37.312415 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-17 00:58:37.312419 | orchestrator | Saturday 17 May 2025 00:56:09 +0000 (0:00:02.479) 0:10:21.414 **********
2025-05-17 00:58:37.312423 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.312427 | orchestrator |
2025-05-17 00:58:37.312431 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-17 00:58:37.312434 | orchestrator | Saturday 17 May 2025 00:56:09 +0000 (0:00:00.721) 0:10:22.135 **********
2025-05-17 00:58:37.312438 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312442 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312446 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312449 | orchestrator |
2025-05-17 00:58:37.312456 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-17 00:58:37.312460 | orchestrator | Saturday 17 May 2025 00:56:10 +0000 (0:00:00.297) 0:10:22.433 **********
2025-05-17 00:58:37.312464 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312468 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312471 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312475 | orchestrator |
2025-05-17 00:58:37.312479 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-17 00:58:37.312483 | orchestrator | Saturday 17 May 2025 00:56:10 +0000 (0:00:00.714) 0:10:23.147 **********
2025-05-17 00:58:37.312486 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312490 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312494 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312498 | orchestrator |
2025-05-17 00:58:37.312501 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-17 00:58:37.312505 | orchestrator | Saturday 17 May 2025 00:56:11 +0000 (0:00:00.704) 0:10:23.851 **********
2025-05-17 00:58:37.312509 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312513 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312516 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312520 | orchestrator |
2025-05-17 00:58:37.312524 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-17 00:58:37.312528 | orchestrator | Saturday 17 May 2025 00:56:12 +0000 (0:00:00.864) 0:10:24.716 **********
2025-05-17 00:58:37.312531 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312535 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312539 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312543 | orchestrator |
2025-05-17 00:58:37.312546 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-17 00:58:37.312550 | orchestrator | Saturday 17 May 2025 00:56:12 +0000 (0:00:00.332) 0:10:25.048 **********
2025-05-17 00:58:37.312554 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312558 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312561 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312565 | orchestrator |
2025-05-17 00:58:37.312569 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-17 00:58:37.312573 | orchestrator | Saturday 17 May 2025 00:56:13 +0000 (0:00:00.302) 0:10:25.351 **********
2025-05-17 00:58:37.312576 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312580 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312584 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312588 | orchestrator |
2025-05-17 00:58:37.312591 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-17 00:58:37.312595 | orchestrator | Saturday 17 May 2025 00:56:13 +0000 (0:00:00.299) 0:10:25.651 **********
2025-05-17 00:58:37.312599 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312603 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312606 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312610 | orchestrator |
2025-05-17 00:58:37.312614 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-17 00:58:37.312620 | orchestrator | Saturday 17 May 2025 00:56:13 +0000 (0:00:00.545) 0:10:26.196 **********
2025-05-17 00:58:37.312624 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312628 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312632 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312635 | orchestrator |
2025-05-17 00:58:37.312639 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-17 00:58:37.312643 | orchestrator | Saturday 17 May 2025 00:56:14 +0000 (0:00:00.311) 0:10:26.508 **********
2025-05-17 00:58:37.312646 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312650 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312654 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312658 | orchestrator |
2025-05-17 00:58:37.312661 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-17 00:58:37.312665 | orchestrator | Saturday 17 May 2025 00:56:14 +0000 (0:00:00.304) 0:10:26.812 **********
2025-05-17 00:58:37.312669 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312673 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312676 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312680 | orchestrator |
2025-05-17 00:58:37.312684 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-17 00:58:37.312687 | orchestrator | Saturday 17 May 2025 00:56:15 +0000 (0:00:00.698) 0:10:27.511 **********
2025-05-17 00:58:37.312691 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312695 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312699 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312702 | orchestrator |
2025-05-17 00:58:37.312706 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-17 00:58:37.312710 | orchestrator | Saturday 17 May 2025 00:56:15 +0000 (0:00:00.548) 0:10:28.060 **********
2025-05-17 00:58:37.312714 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312717 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312721 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312725 | orchestrator |
2025-05-17 00:58:37.312728 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-17 00:58:37.312732 | orchestrator | Saturday 17 May 2025 00:56:16 +0000 (0:00:00.311) 0:10:28.371 **********
2025-05-17 00:58:37.312736 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312740 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312747 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312751 | orchestrator |
2025-05-17 00:58:37.312755 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-17 00:58:37.312759 | orchestrator | Saturday 17 May 2025 00:56:16 +0000 (0:00:00.350) 0:10:28.722 **********
2025-05-17 00:58:37.312762 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312766 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312770 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312774 | orchestrator |
2025-05-17 00:58:37.312777 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-17 00:58:37.312781 | orchestrator | Saturday 17 May 2025 00:56:16 +0000 (0:00:00.339) 0:10:29.062 **********
2025-05-17 00:58:37.312785 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312789 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312792 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312796 | orchestrator |
2025-05-17 00:58:37.312800 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-17 00:58:37.312806 | orchestrator | Saturday 17 May 2025 00:56:17 +0000 (0:00:00.620) 0:10:29.683 **********
2025-05-17 00:58:37.312810 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312814 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312817 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312821 | orchestrator |
2025-05-17 00:58:37.312825 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-17 00:58:37.312829 | orchestrator | Saturday 17 May 2025 00:56:17 +0000 (0:00:00.357) 0:10:30.040 **********
2025-05-17 00:58:37.312835 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312839 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312842 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312846 | orchestrator |
2025-05-17 00:58:37.312850 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-17 00:58:37.312853 | orchestrator | Saturday 17 May 2025 00:56:18 +0000 (0:00:00.375) 0:10:30.416 **********
2025-05-17 00:58:37.312857 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312861 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312865 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312868 | orchestrator |
2025-05-17 00:58:37.312872 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-17 00:58:37.312876 | orchestrator | Saturday 17 May 2025 00:56:18 +0000 (0:00:00.335) 0:10:30.752 **********
2025-05-17 00:58:37.312880 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.312883 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.312887 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.312891 | orchestrator |
2025-05-17 00:58:37.312895 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-17 00:58:37.312899 | orchestrator | Saturday 17 May 2025 00:56:19 +0000 (0:00:00.686) 0:10:31.438 **********
2025-05-17 00:58:37.312902 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312906 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312910 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312913 | orchestrator |
2025-05-17 00:58:37.312917 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-17 00:58:37.312938 | orchestrator | Saturday 17 May 2025 00:56:19 +0000 (0:00:00.455) 0:10:31.894 **********
2025-05-17 00:58:37.312943 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312947 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312950 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312954 | orchestrator |
2025-05-17 00:58:37.312958 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-17 00:58:37.312962 | orchestrator | Saturday 17 May 2025 00:56:19 +0000 (0:00:00.315) 0:10:32.275 **********
2025-05-17 00:58:37.312965 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312969 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312973 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312977 | orchestrator |
2025-05-17 00:58:37.312980 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-17 00:58:37.312984 | orchestrator | Saturday 17 May 2025 00:56:20 +0000 (0:00:00.460) 0:10:32.590 **********
2025-05-17 00:58:37.312988 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.312992 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.312995 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.312999 | orchestrator |
2025-05-17 00:58:37.313003 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-17 00:58:37.313007 | orchestrator | Saturday 17 May 2025 00:56:20 +0000 (0:00:00.292) 0:10:33.051 **********
2025-05-17 00:58:37.313010 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.313014 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.313018 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.313021 | orchestrator |
2025-05-17 00:58:37.313025 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-17 00:58:37.313029 | orchestrator | Saturday 17 May 2025 00:56:21 +0000 (0:00:00.335) 0:10:33.344 **********
2025-05-17 00:58:37.313033 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.313036 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.313040 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.313044 | orchestrator |
2025-05-17 00:58:37.313047 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-17
00:58:37.313051 | orchestrator | Saturday 17 May 2025 00:56:21 +0000 (0:00:00.314) 0:10:33.658 ********** 2025-05-17 00:58:37.313055 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313061 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313065 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313068 | orchestrator | 2025-05-17 00:58:37.313072 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-17 00:58:37.313076 | orchestrator | Saturday 17 May 2025 00:56:21 +0000 (0:00:00.281) 0:10:33.940 ********** 2025-05-17 00:58:37.313080 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313083 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313087 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313091 | orchestrator | 2025-05-17 00:58:37.313095 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-17 00:58:37.313098 | orchestrator | Saturday 17 May 2025 00:56:22 +0000 (0:00:00.480) 0:10:34.421 ********** 2025-05-17 00:58:37.313105 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313108 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313112 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313116 | orchestrator | 2025-05-17 00:58:37.313120 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-17 00:58:37.313123 | orchestrator | Saturday 17 May 2025 00:56:22 +0000 (0:00:00.312) 0:10:34.733 ********** 2025-05-17 00:58:37.313127 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313131 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313135 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313138 | orchestrator | 2025-05-17 00:58:37.313142 | orchestrator | TASK [ceph-config 
: run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-17 00:58:37.313146 | orchestrator | Saturday 17 May 2025 00:56:22 +0000 (0:00:00.270) 0:10:35.003 ********** 2025-05-17 00:58:37.313149 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313153 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313157 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313160 | orchestrator | 2025-05-17 00:58:37.313166 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-17 00:58:37.313170 | orchestrator | Saturday 17 May 2025 00:56:23 +0000 (0:00:00.305) 0:10:35.309 ********** 2025-05-17 00:58:37.313174 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313178 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313181 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313185 | orchestrator | 2025-05-17 00:58:37.313189 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-17 00:58:37.313192 | orchestrator | Saturday 17 May 2025 00:56:23 +0000 (0:00:00.601) 0:10:35.910 ********** 2025-05-17 00:58:37.313196 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-17 00:58:37.313200 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-17 00:58:37.313204 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313207 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-17 00:58:37.313211 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-17 00:58:37.313215 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313218 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-17 00:58:37.313222 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-17 00:58:37.313226 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313230 | orchestrator | 2025-05-17 00:58:37.313233 | orchestrator 
| TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-17 00:58:37.313237 | orchestrator | Saturday 17 May 2025 00:56:24 +0000 (0:00:00.384) 0:10:36.295 ********** 2025-05-17 00:58:37.313241 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-17 00:58:37.313245 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-17 00:58:37.313248 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-17 00:58:37.313252 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-17 00:58:37.313261 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313265 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313269 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-17 00:58:37.313272 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-17 00:58:37.313276 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313280 | orchestrator | 2025-05-17 00:58:37.313284 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-17 00:58:37.313287 | orchestrator | Saturday 17 May 2025 00:56:24 +0000 (0:00:00.306) 0:10:36.602 ********** 2025-05-17 00:58:37.313291 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313295 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313298 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313302 | orchestrator | 2025-05-17 00:58:37.313306 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-17 00:58:37.313309 | orchestrator | Saturday 17 May 2025 00:56:24 +0000 (0:00:00.285) 0:10:36.887 ********** 2025-05-17 00:58:37.313313 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313317 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313321 | orchestrator | skipping: 
[testbed-node-5] 2025-05-17 00:58:37.313324 | orchestrator | 2025-05-17 00:58:37.313328 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-17 00:58:37.313332 | orchestrator | Saturday 17 May 2025 00:56:25 +0000 (0:00:00.433) 0:10:37.321 ********** 2025-05-17 00:58:37.313336 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313339 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313343 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313347 | orchestrator | 2025-05-17 00:58:37.313350 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-17 00:58:37.313354 | orchestrator | Saturday 17 May 2025 00:56:25 +0000 (0:00:00.308) 0:10:37.629 ********** 2025-05-17 00:58:37.313358 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313361 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313365 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313369 | orchestrator | 2025-05-17 00:58:37.313372 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-17 00:58:37.313376 | orchestrator | Saturday 17 May 2025 00:56:25 +0000 (0:00:00.301) 0:10:37.931 ********** 2025-05-17 00:58:37.313380 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313384 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313387 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313391 | orchestrator | 2025-05-17 00:58:37.313395 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-17 00:58:37.313398 | orchestrator | Saturday 17 May 2025 00:56:25 +0000 (0:00:00.263) 0:10:38.194 ********** 2025-05-17 00:58:37.313402 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313406 | orchestrator | skipping: [testbed-node-4] 
2025-05-17 00:58:37.313410 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313413 | orchestrator | 2025-05-17 00:58:37.313417 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-17 00:58:37.313423 | orchestrator | Saturday 17 May 2025 00:56:26 +0000 (0:00:00.452) 0:10:38.646 ********** 2025-05-17 00:58:37.313427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.313431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.313434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.313438 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313442 | orchestrator | 2025-05-17 00:58:37.313445 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-17 00:58:37.313449 | orchestrator | Saturday 17 May 2025 00:56:26 +0000 (0:00:00.411) 0:10:39.058 ********** 2025-05-17 00:58:37.313453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.313460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.313464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.313467 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313471 | orchestrator | 2025-05-17 00:58:37.313477 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-17 00:58:37.313481 | orchestrator | Saturday 17 May 2025 00:56:27 +0000 (0:00:00.450) 0:10:39.509 ********** 2025-05-17 00:58:37.313485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.313488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.313492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.313496 | orchestrator | skipping: 
[testbed-node-3] 2025-05-17 00:58:37.313499 | orchestrator | 2025-05-17 00:58:37.313503 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 00:58:37.313507 | orchestrator | Saturday 17 May 2025 00:56:27 +0000 (0:00:00.410) 0:10:39.919 ********** 2025-05-17 00:58:37.313511 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313514 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313518 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313522 | orchestrator | 2025-05-17 00:58:37.313525 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-17 00:58:37.313529 | orchestrator | Saturday 17 May 2025 00:56:27 +0000 (0:00:00.327) 0:10:40.247 ********** 2025-05-17 00:58:37.313533 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-17 00:58:37.313537 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313540 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-17 00:58:37.313544 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313548 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-17 00:58:37.313552 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313555 | orchestrator | 2025-05-17 00:58:37.313559 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-17 00:58:37.313563 | orchestrator | Saturday 17 May 2025 00:56:28 +0000 (0:00:00.447) 0:10:40.694 ********** 2025-05-17 00:58:37.313566 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313570 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313574 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313577 | orchestrator | 2025-05-17 00:58:37.313581 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 00:58:37.313585 | orchestrator | Saturday 17 May 2025 
00:56:28 +0000 (0:00:00.565) 0:10:41.260 ********** 2025-05-17 00:58:37.313589 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313592 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313596 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313600 | orchestrator | 2025-05-17 00:58:37.313603 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-17 00:58:37.313607 | orchestrator | Saturday 17 May 2025 00:56:29 +0000 (0:00:00.322) 0:10:41.583 ********** 2025-05-17 00:58:37.313611 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-17 00:58:37.313614 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313618 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-17 00:58:37.313622 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313625 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-17 00:58:37.313629 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313633 | orchestrator | 2025-05-17 00:58:37.313637 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-17 00:58:37.313640 | orchestrator | Saturday 17 May 2025 00:56:29 +0000 (0:00:00.452) 0:10:42.035 ********** 2025-05-17 00:58:37.313644 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-17 00:58:37.313648 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313652 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-17 00:58:37.313659 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313662 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-17 00:58:37.313666 | 
orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313670 | orchestrator | 2025-05-17 00:58:37.313674 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-17 00:58:37.313677 | orchestrator | Saturday 17 May 2025 00:56:30 +0000 (0:00:00.344) 0:10:42.380 ********** 2025-05-17 00:58:37.313681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.313685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.313688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.313692 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-17 00:58:37.313696 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-17 00:58:37.313699 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-17 00:58:37.313703 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313707 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313713 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-17 00:58:37.313717 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-17 00:58:37.313721 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-17 00:58:37.313724 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313728 | orchestrator | 2025-05-17 00:58:37.313732 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-17 00:58:37.313736 | orchestrator | Saturday 17 May 2025 00:56:31 +0000 (0:00:00.919) 0:10:43.299 ********** 2025-05-17 00:58:37.313739 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313743 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313747 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313751 | orchestrator | 2025-05-17 00:58:37.313754 | orchestrator 
| TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-17 00:58:37.313758 | orchestrator | Saturday 17 May 2025 00:56:31 +0000 (0:00:00.564) 0:10:43.864 ********** 2025-05-17 00:58:37.313762 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-17 00:58:37.313768 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313772 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-17 00:58:37.313776 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313779 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-17 00:58:37.313783 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313787 | orchestrator | 2025-05-17 00:58:37.313790 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-17 00:58:37.313794 | orchestrator | Saturday 17 May 2025 00:56:32 +0000 (0:00:00.935) 0:10:44.799 ********** 2025-05-17 00:58:37.313798 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313801 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313805 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313809 | orchestrator | 2025-05-17 00:58:37.313812 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-17 00:58:37.313816 | orchestrator | Saturday 17 May 2025 00:56:33 +0000 (0:00:00.576) 0:10:45.376 ********** 2025-05-17 00:58:37.313820 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313823 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.313827 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313831 | orchestrator | 2025-05-17 00:58:37.313835 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-17 00:58:37.313838 | orchestrator | Saturday 17 May 2025 00:56:33 +0000 (0:00:00.813) 0:10:46.189 ********** 2025-05-17 00:58:37.313842 | orchestrator 
| skipping: [testbed-node-4] 2025-05-17 00:58:37.313848 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.313852 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-17 00:58:37.313856 | orchestrator | 2025-05-17 00:58:37.313860 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-17 00:58:37.313863 | orchestrator | Saturday 17 May 2025 00:56:34 +0000 (0:00:00.346) 0:10:46.536 ********** 2025-05-17 00:58:37.313867 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-17 00:58:37.313871 | orchestrator | 2025-05-17 00:58:37.313875 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-17 00:58:37.313878 | orchestrator | Saturday 17 May 2025 00:56:35 +0000 (0:00:01.687) 0:10:48.224 ********** 2025-05-17 00:58:37.313883 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-17 00:58:37.313889 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.313892 | orchestrator | 2025-05-17 00:58:37.313896 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-05-17 00:58:37.313900 | orchestrator | Saturday 17 May 2025 00:56:36 +0000 (0:00:00.461) 0:10:48.686 ********** 2025-05-17 00:58:37.313905 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-17 00:58:37.313914 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': 
'', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-17 00:58:37.313918 | orchestrator | 2025-05-17 00:58:37.313951 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-17 00:58:37.313955 | orchestrator | Saturday 17 May 2025 00:56:42 +0000 (0:00:06.445) 0:10:55.132 ********** 2025-05-17 00:58:37.313959 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-17 00:58:37.313962 | orchestrator | 2025-05-17 00:58:37.313966 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-17 00:58:37.313970 | orchestrator | Saturday 17 May 2025 00:56:45 +0000 (0:00:02.981) 0:10:58.113 ********** 2025-05-17 00:58:37.313973 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.313977 | orchestrator | 2025-05-17 00:58:37.313981 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-17 00:58:37.313984 | orchestrator | Saturday 17 May 2025 00:56:46 +0000 (0:00:00.521) 0:10:58.634 ********** 2025-05-17 00:58:37.313988 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-17 00:58:37.313992 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-17 00:58:37.313996 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-17 00:58:37.314002 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-17 00:58:37.314005 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-17 00:58:37.314009 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-17 00:58:37.314036 | orchestrator | 2025-05-17 
00:58:37.314041 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-17 00:58:37.314045 | orchestrator | Saturday 17 May 2025 00:56:47 +0000 (0:00:01.228) 0:10:59.863 ********** 2025-05-17 00:58:37.314049 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 00:58:37.314052 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-17 00:58:37.314061 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-17 00:58:37.314065 | orchestrator | 2025-05-17 00:58:37.314068 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-17 00:58:37.314075 | orchestrator | Saturday 17 May 2025 00:56:49 +0000 (0:00:01.879) 0:11:01.743 ********** 2025-05-17 00:58:37.314079 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-17 00:58:37.314083 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-17 00:58:37.314086 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.314090 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-17 00:58:37.314094 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-17 00:58:37.314098 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.314101 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-17 00:58:37.314105 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-17 00:58:37.314109 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.314113 | orchestrator | 2025-05-17 00:58:37.314116 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-17 00:58:37.314120 | orchestrator | Saturday 17 May 2025 00:56:50 +0000 (0:00:01.155) 0:11:02.898 ********** 2025-05-17 00:58:37.314124 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.314128 | orchestrator | skipping: [testbed-node-4] 2025-05-17 
00:58:37.314132 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.314135 | orchestrator | 2025-05-17 00:58:37.314139 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-17 00:58:37.314143 | orchestrator | Saturday 17 May 2025 00:56:50 +0000 (0:00:00.270) 0:11:03.169 ********** 2025-05-17 00:58:37.314146 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.314150 | orchestrator | 2025-05-17 00:58:37.314154 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-17 00:58:37.314157 | orchestrator | Saturday 17 May 2025 00:56:51 +0000 (0:00:00.660) 0:11:03.829 ********** 2025-05-17 00:58:37.314161 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.314165 | orchestrator | 2025-05-17 00:58:37.314169 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-17 00:58:37.314172 | orchestrator | Saturday 17 May 2025 00:56:52 +0000 (0:00:00.458) 0:11:04.287 ********** 2025-05-17 00:58:37.314176 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.314180 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.314184 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.314187 | orchestrator | 2025-05-17 00:58:37.314191 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-17 00:58:37.314195 | orchestrator | Saturday 17 May 2025 00:56:53 +0000 (0:00:01.288) 0:11:05.576 ********** 2025-05-17 00:58:37.314199 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.314202 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.314206 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.314210 | orchestrator | 2025-05-17 00:58:37.314213 | 
orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-17 00:58:37.314217 | orchestrator | Saturday 17 May 2025 00:56:54 +0000 (0:00:01.142) 0:11:06.718 ********** 2025-05-17 00:58:37.314221 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.314225 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.314229 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.314232 | orchestrator | 2025-05-17 00:58:37.314236 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-17 00:58:37.314240 | orchestrator | Saturday 17 May 2025 00:56:56 +0000 (0:00:01.658) 0:11:08.377 ********** 2025-05-17 00:58:37.314243 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.314247 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.314251 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.314255 | orchestrator | 2025-05-17 00:58:37.314261 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-17 00:58:37.314265 | orchestrator | Saturday 17 May 2025 00:56:58 +0000 (0:00:02.160) 0:11:10.538 ********** 2025-05-17 00:58:37.314269 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-17 00:58:37.314273 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-17 00:58:37.314276 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 
2025-05-17 00:58:37.314280 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314284 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314287 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314291 | orchestrator |
2025-05-17 00:58:37.314295 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-17 00:58:37.314299 | orchestrator | Saturday 17 May 2025 00:57:15 +0000 (0:00:17.052) 0:11:27.590 **********
2025-05-17 00:58:37.314302 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.314306 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.314310 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.314313 | orchestrator |
2025-05-17 00:58:37.314317 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-05-17 00:58:37.314321 | orchestrator | Saturday 17 May 2025 00:57:16 +0000 (0:00:00.699) 0:11:28.290 **********
2025-05-17 00:58:37.314328 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.314332 | orchestrator |
2025-05-17 00:58:37.314336 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ********
2025-05-17 00:58:37.314339 | orchestrator | Saturday 17 May 2025 00:57:16 +0000 (0:00:00.730) 0:11:29.020 **********
2025-05-17 00:58:37.314343 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314347 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314351 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314354 | orchestrator |
2025-05-17 00:58:37.314358 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
2025-05-17 00:58:37.314362 | orchestrator | Saturday 17 May 2025 00:57:17 +0000 (0:00:00.317) 0:11:29.338 **********
2025-05-17 00:58:37.314365 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.314369 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.314373 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.314377 | orchestrator |
2025-05-17 00:58:37.314383 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ********************
2025-05-17 00:58:37.314386 | orchestrator | Saturday 17 May 2025 00:57:18 +0000 (0:00:01.205) 0:11:30.543 **********
2025-05-17 00:58:37.314390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.314394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.314398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.314402 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314408 | orchestrator |
2025-05-17 00:58:37.314415 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
2025-05-17 00:58:37.314419 | orchestrator | Saturday 17 May 2025 00:57:19 +0000 (0:00:00.887) 0:11:31.430 **********
2025-05-17 00:58:37.314422 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314426 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314430 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314434 | orchestrator |
2025-05-17 00:58:37.314437 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-17 00:58:37.314441 | orchestrator | Saturday 17 May 2025 00:57:19 +0000 (0:00:00.554) 0:11:31.984 **********
2025-05-17 00:58:37.314445 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.314449 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.314452 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.314456 | orchestrator |
2025-05-17 00:58:37.314460 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-17 00:58:37.314467 | orchestrator |
2025-05-17 00:58:37.314470 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-17 00:58:37.314474 | orchestrator | Saturday 17 May 2025 00:57:21 +0000 (0:00:02.018) 0:11:34.003 **********
2025-05-17 00:58:37.314478 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.314482 | orchestrator |
2025-05-17 00:58:37.314485 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-17 00:58:37.314489 | orchestrator | Saturday 17 May 2025 00:57:22 +0000 (0:00:00.707) 0:11:34.710 **********
2025-05-17 00:58:37.314493 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314496 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314500 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314504 | orchestrator |
2025-05-17 00:58:37.314508 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-17 00:58:37.314511 | orchestrator | Saturday 17 May 2025 00:57:22 +0000 (0:00:00.325) 0:11:35.036 **********
2025-05-17 00:58:37.314515 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314519 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314523 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314527 | orchestrator |
2025-05-17 00:58:37.314530 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-17 00:58:37.314534 | orchestrator | Saturday 17 May 2025 00:57:23 +0000 (0:00:00.726) 0:11:35.762 **********
2025-05-17 00:58:37.314538 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314542 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314545 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314549 | orchestrator |
2025-05-17 00:58:37.314553 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-17 00:58:37.314557 | orchestrator | Saturday 17 May 2025 00:57:24 +0000 (0:00:01.009) 0:11:36.772 **********
2025-05-17 00:58:37.314560 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314564 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314568 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314572 | orchestrator |
2025-05-17 00:58:37.314575 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-17 00:58:37.314579 | orchestrator | Saturday 17 May 2025 00:57:25 +0000 (0:00:00.717) 0:11:37.489 **********
2025-05-17 00:58:37.314583 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314587 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314591 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314594 | orchestrator |
2025-05-17 00:58:37.314598 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-17 00:58:37.314602 | orchestrator | Saturday 17 May 2025 00:57:25 +0000 (0:00:00.335) 0:11:37.825 **********
2025-05-17 00:58:37.314606 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314609 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314613 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314617 | orchestrator |
2025-05-17 00:58:37.314620 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-17 00:58:37.314624 | orchestrator | Saturday 17 May 2025 00:57:25 +0000 (0:00:00.332) 0:11:38.157 **********
2025-05-17 00:58:37.314628 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314632 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314635 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314639 | orchestrator |
2025-05-17 00:58:37.314643 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-17 00:58:37.314646 | orchestrator | Saturday 17 May 2025 00:57:26 +0000 (0:00:00.628) 0:11:38.785 **********
2025-05-17 00:58:37.314650 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314654 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314660 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314671 | orchestrator |
2025-05-17 00:58:37.314675 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-17 00:58:37.314681 | orchestrator | Saturday 17 May 2025 00:57:26 +0000 (0:00:00.341) 0:11:39.127 **********
2025-05-17 00:58:37.314685 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314689 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314693 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314696 | orchestrator |
2025-05-17 00:58:37.314700 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-17 00:58:37.314709 | orchestrator | Saturday 17 May 2025 00:57:27 +0000 (0:00:00.318) 0:11:39.445 **********
2025-05-17 00:58:37.314713 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314717 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314721 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314724 | orchestrator |
2025-05-17 00:58:37.314728 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-17 00:58:37.314734 | orchestrator | Saturday 17 May 2025 00:57:27 +0000 (0:00:00.295) 0:11:39.740 **********
2025-05-17 00:58:37.314738 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314742 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314746 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314750 | orchestrator |
2025-05-17 00:58:37.314754 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-17 00:58:37.314757 | orchestrator | Saturday 17 May 2025 00:57:28 +0000 (0:00:00.884) 0:11:40.625 **********
2025-05-17 00:58:37.314761 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314765 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314768 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314772 | orchestrator |
2025-05-17 00:58:37.314776 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-17 00:58:37.314780 | orchestrator | Saturday 17 May 2025 00:57:28 +0000 (0:00:00.333) 0:11:40.958 **********
2025-05-17 00:58:37.314783 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314787 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314794 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314800 | orchestrator |
2025-05-17 00:58:37.314807 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-17 00:58:37.314814 | orchestrator | Saturday 17 May 2025 00:57:28 +0000 (0:00:00.325) 0:11:41.284 **********
2025-05-17 00:58:37.314825 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314831 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314841 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314847 | orchestrator |
2025-05-17 00:58:37.314852 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-17 00:58:37.314858 | orchestrator | Saturday 17 May 2025 00:57:29 +0000 (0:00:00.334) 0:11:41.618 **********
2025-05-17 00:58:37.314864 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314869 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314875 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314880 | orchestrator |
2025-05-17 00:58:37.314885 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-17 00:58:37.314891 | orchestrator | Saturday 17 May 2025 00:57:29 +0000 (0:00:00.569) 0:11:42.187 **********
2025-05-17 00:58:37.314897 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.314903 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.314908 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.314914 | orchestrator |
2025-05-17 00:58:37.314934 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-17 00:58:37.314940 | orchestrator | Saturday 17 May 2025 00:57:30 +0000 (0:00:00.353) 0:11:42.541 **********
2025-05-17 00:58:37.314946 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314952 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314958 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.314964 | orchestrator |
2025-05-17 00:58:37.314970 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-17 00:58:37.314976 | orchestrator | Saturday 17 May 2025 00:57:30 +0000 (0:00:00.315) 0:11:42.856 **********
2025-05-17 00:58:37.314988 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.314993 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.314996 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315000 | orchestrator |
2025-05-17 00:58:37.315004 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-17 00:58:37.315008 | orchestrator | Saturday 17 May 2025 00:57:30 +0000 (0:00:00.309) 0:11:43.166 **********
2025-05-17 00:58:37.315011 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315015 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315019 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315022 | orchestrator |
2025-05-17 00:58:37.315026 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-17 00:58:37.315030 | orchestrator | Saturday 17 May 2025 00:57:31 +0000 (0:00:00.562) 0:11:43.729 **********
2025-05-17 00:58:37.315034 | orchestrator | ok: [testbed-node-3]
2025-05-17 00:58:37.315038 | orchestrator | ok: [testbed-node-4]
2025-05-17 00:58:37.315041 | orchestrator | ok: [testbed-node-5]
2025-05-17 00:58:37.315045 | orchestrator |
2025-05-17 00:58:37.315049 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-17 00:58:37.315053 | orchestrator | Saturday 17 May 2025 00:57:31 +0000 (0:00:00.330) 0:11:44.059 **********
2025-05-17 00:58:37.315056 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315060 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315064 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315067 | orchestrator |
2025-05-17 00:58:37.315071 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-17 00:58:37.315075 | orchestrator | Saturday 17 May 2025 00:57:32 +0000 (0:00:00.354) 0:11:44.413 **********
2025-05-17 00:58:37.315079 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315082 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315086 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315090 | orchestrator |
2025-05-17 00:58:37.315093 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-17 00:58:37.315097 | orchestrator | Saturday 17 May 2025 00:57:32 +0000 (0:00:00.329) 0:11:44.743 **********
2025-05-17 00:58:37.315101 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315104 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315108 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315112 | orchestrator |
2025-05-17 00:58:37.315119 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-17 00:58:37.315123 | orchestrator | Saturday 17 May 2025 00:57:33 +0000 (0:00:00.590) 0:11:45.333 **********
2025-05-17 00:58:37.315127 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315131 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315134 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315138 | orchestrator |
2025-05-17 00:58:37.315142 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-17 00:58:37.315145 | orchestrator | Saturday 17 May 2025 00:57:33 +0000 (0:00:00.375) 0:11:45.708 **********
2025-05-17 00:58:37.315149 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315153 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315156 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315160 | orchestrator |
2025-05-17 00:58:37.315164 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-17 00:58:37.315170 | orchestrator | Saturday 17 May 2025 00:57:33 +0000 (0:00:00.325) 0:11:46.034 **********
2025-05-17 00:58:37.315174 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315178 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315182 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315185 | orchestrator |
2025-05-17 00:58:37.315189 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-17 00:58:37.315193 | orchestrator | Saturday 17 May 2025 00:57:34 +0000 (0:00:00.345) 0:11:46.380 **********
2025-05-17 00:58:37.315196 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315204 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315208 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315211 | orchestrator |
2025-05-17 00:58:37.315215 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-17 00:58:37.315219 | orchestrator | Saturday 17 May 2025 00:57:34 +0000 (0:00:00.656) 0:11:47.036 **********
2025-05-17 00:58:37.315223 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315226 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315230 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315234 | orchestrator |
2025-05-17 00:58:37.315237 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-17 00:58:37.315241 | orchestrator | Saturday 17 May 2025 00:57:35 +0000 (0:00:00.329) 0:11:47.366 **********
2025-05-17 00:58:37.315245 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315249 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315252 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315256 | orchestrator |
2025-05-17 00:58:37.315260 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-17 00:58:37.315264 | orchestrator | Saturday 17 May 2025 00:57:35 +0000 (0:00:00.337) 0:11:47.703 **********
2025-05-17 00:58:37.315267 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315271 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315275 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315278 | orchestrator |
2025-05-17 00:58:37.315282 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-17 00:58:37.315286 | orchestrator | Saturday 17 May 2025 00:57:35 +0000 (0:00:00.310) 0:11:48.014 **********
2025-05-17 00:58:37.315289 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315293 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315297 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315300 | orchestrator |
2025-05-17 00:58:37.315304 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-17 00:58:37.315308 | orchestrator | Saturday 17 May 2025 00:57:36 +0000 (0:00:00.571) 0:11:48.586 **********
2025-05-17 00:58:37.315311 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315315 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315319 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315322 | orchestrator |
2025-05-17 00:58:37.315326 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-17 00:58:37.315330 | orchestrator | Saturday 17 May 2025 00:57:36 +0000 (0:00:00.333) 0:11:48.919 **********
2025-05-17 00:58:37.315334 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-17 00:58:37.315337 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-17 00:58:37.315341 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315345 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-17 00:58:37.315348 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-17 00:58:37.315352 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315356 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-17 00:58:37.315359 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-17 00:58:37.315363 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315367 | orchestrator |
2025-05-17 00:58:37.315370 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-17 00:58:37.315374 | orchestrator | Saturday 17 May 2025 00:57:36 +0000 (0:00:00.361) 0:11:49.280 **********
2025-05-17 00:58:37.315378 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-17 00:58:37.315382 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-17 00:58:37.315385 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315389 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-17 00:58:37.315393 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-17 00:58:37.315400 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315403 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-17 00:58:37.315407 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-17 00:58:37.315411 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315414 | orchestrator |
2025-05-17 00:58:37.315418 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-17 00:58:37.315422 | orchestrator | Saturday 17 May 2025 00:57:37 +0000 (0:00:00.349) 0:11:49.630 **********
2025-05-17 00:58:37.315426 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315429 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315435 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315439 | orchestrator |
2025-05-17 00:58:37.315443 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-17 00:58:37.315447 | orchestrator | Saturday 17 May 2025 00:57:37 +0000 (0:00:00.568) 0:11:50.198 **********
2025-05-17 00:58:37.315450 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315454 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315458 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315461 | orchestrator |
2025-05-17 00:58:37.315465 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-17 00:58:37.315469 | orchestrator | Saturday 17 May 2025 00:57:38 +0000 (0:00:00.344) 0:11:50.542 **********
2025-05-17 00:58:37.315473 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315476 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315480 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315484 | orchestrator |
2025-05-17 00:58:37.315490 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-17 00:58:37.315494 | orchestrator | Saturday 17 May 2025 00:57:38 +0000 (0:00:00.322) 0:11:50.865 **********
2025-05-17 00:58:37.315498 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315501 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315505 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315509 | orchestrator |
2025-05-17 00:58:37.315513 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-17 00:58:37.315516 | orchestrator | Saturday 17 May 2025 00:57:38 +0000 (0:00:00.307) 0:11:51.172 **********
2025-05-17 00:58:37.315520 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315524 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315527 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315531 | orchestrator |
2025-05-17 00:58:37.315535 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-17 00:58:37.315538 | orchestrator | Saturday 17 May 2025 00:57:39 +0000 (0:00:00.630) 0:11:51.803 **********
2025-05-17 00:58:37.315542 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315546 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315550 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315553 | orchestrator |
2025-05-17 00:58:37.315557 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-17 00:58:37.315560 | orchestrator | Saturday 17 May 2025 00:57:39 +0000 (0:00:00.317) 0:11:52.121 **********
2025-05-17 00:58:37.315564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.315568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.315572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.315575 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315579 | orchestrator |
2025-05-17 00:58:37.315583 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-17 00:58:37.315586 | orchestrator | Saturday 17 May 2025 00:57:40 +0000 (0:00:00.435) 0:11:52.556 **********
2025-05-17 00:58:37.315590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.315597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.315601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.315604 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315608 | orchestrator |
2025-05-17 00:58:37.315613 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-17 00:58:37.315619 | orchestrator | Saturday 17 May 2025 00:57:40 +0000 (0:00:00.414) 0:11:52.971 **********
2025-05-17 00:58:37.315624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.315628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.315632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.315635 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315639 | orchestrator |
2025-05-17 00:58:37.315643 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-17 00:58:37.315647 | orchestrator | Saturday 17 May 2025 00:57:41 +0000 (0:00:00.388) 0:11:53.359 **********
2025-05-17 00:58:37.315650 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315654 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315658 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315662 | orchestrator |
2025-05-17 00:58:37.315665 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-17 00:58:37.315669 | orchestrator | Saturday 17 May 2025 00:57:41 +0000 (0:00:00.349) 0:11:53.709 **********
2025-05-17 00:58:37.315673 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-17 00:58:37.315677 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315681 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-17 00:58:37.315687 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315693 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-17 00:58:37.315699 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315705 | orchestrator |
2025-05-17 00:58:37.315710 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-17 00:58:37.315716 | orchestrator | Saturday 17 May 2025 00:57:42 +0000 (0:00:00.723) 0:11:54.432 **********
2025-05-17 00:58:37.315722 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315728 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315733 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315739 | orchestrator |
2025-05-17 00:58:37.315745 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-17 00:58:37.315751 | orchestrator | Saturday 17 May 2025 00:57:42 +0000 (0:00:00.329) 0:11:54.762 **********
2025-05-17 00:58:37.315758 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315763 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315769 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315776 | orchestrator |
2025-05-17 00:58:37.315782 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-17 00:58:37.315789 | orchestrator | Saturday 17 May 2025 00:57:42 +0000 (0:00:00.332) 0:11:55.094
2025-05-17 00:58:37.315798 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-17 00:58:37.315805 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315811 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-17 00:58:37.315817 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315824 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-17 00:58:37.315830 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315836 | orchestrator |
2025-05-17 00:58:37.315841 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-17 00:58:37.315847 | orchestrator | Saturday 17 May 2025 00:57:43 +0000 (0:00:00.419) 0:11:55.515 **********
2025-05-17 00:58:37.315852 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-17 00:58:37.315859 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315869 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-17 00:58:37.315879 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315885 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-17 00:58:37.315891 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315896 | orchestrator |
2025-05-17 00:58:37.315902 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-17 00:58:37.315908 | orchestrator | Saturday 17 May 2025 00:57:43 +0000 (0:00:00.604) 0:11:56.119 **********
2025-05-17 00:58:37.315914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-17 00:58:37.315935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-17 00:58:37.315941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-17 00:58:37.315947 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.315952 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-17 00:58:37.315958 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-17 00:58:37.315964 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-17 00:58:37.315970 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.315976 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-17 00:58:37.315981 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-17 00:58:37.315987 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-17 00:58:37.315993 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.315998 | orchestrator |
2025-05-17 00:58:37.316004 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-17 00:58:37.316010 | orchestrator | Saturday 17 May 2025 00:57:44 +0000 (0:00:00.604) 0:11:56.724 **********
2025-05-17 00:58:37.316016 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.316022 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.316027 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.316033 | orchestrator |
2025-05-17 00:58:37.316039 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-17 00:58:37.316045 | orchestrator | Saturday 17 May 2025 00:57:45 +0000 (0:00:00.807) 0:11:57.531 **********
2025-05-17 00:58:37.316051 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-17 00:58:37.316056 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.316062 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-17 00:58:37.316067 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.316073 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-17 00:58:37.316079 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.316085 | orchestrator |
2025-05-17 00:58:37.316091 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-17 00:58:37.316097 | orchestrator | Saturday 17 May 2025 00:57:45 +0000 (0:00:00.593) 0:11:58.124 **********
2025-05-17 00:58:37.316103 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.316109 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.316116 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.316122 | orchestrator |
2025-05-17 00:58:37.316129 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-17 00:58:37.316135 | orchestrator | Saturday 17 May 2025 00:57:46 +0000 (0:00:00.765) 0:11:58.890 **********
2025-05-17 00:58:37.316140 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.316146 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.316153 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.316157 | orchestrator |
2025-05-17 00:58:37.316161 | orchestrator | TASK [ceph-rgw : include common.yml] *******************************************
2025-05-17 00:58:37.316165 | orchestrator | Saturday 17 May 2025 00:57:47 +0000 (0:00:00.562) 0:11:59.452 **********
2025-05-17 00:58:37.316168 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 00:58:37.316179 | orchestrator |
2025-05-17 00:58:37.316183 | orchestrator | TASK [ceph-rgw : create rados gateway directories] *****************************
2025-05-17 00:58:37.316187 | orchestrator | Saturday 17 May 2025 00:57:47 +0000 (0:00:00.749) 0:12:00.202 **********
2025-05-17 00:58:37.316190 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2025-05-17 00:58:37.316194 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2025-05-17 00:58:37.316198 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2025-05-17 00:58:37.316202 | orchestrator |
2025-05-17 00:58:37.316205 | orchestrator | TASK [ceph-rgw : get keys from monitors] ***************************************
2025-05-17 00:58:37.316210 | orchestrator | Saturday 17 May 2025 00:57:48 +0000 (0:00:00.717) 0:12:00.920 **********
2025-05-17 00:58:37.316216 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-17 00:58:37.316222 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-17 00:58:37.316226 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-17 00:58:37.316229 | orchestrator |
2025-05-17 00:58:37.316237 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] ***********************************
2025-05-17 00:58:37.316241 | orchestrator | Saturday 17 May 2025 00:57:50 +0000 (0:00:01.846) 0:12:02.767 **********
2025-05-17 00:58:37.316245 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-17 00:58:37.316249 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-17 00:58:37.316252 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.316256 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-17 00:58:37.316260 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-17 00:58:37.316264 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.316268 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-17 00:58:37.316271 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-17 00:58:37.316275 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.316279 | orchestrator |
2025-05-17 00:58:37.316283 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] **********
2025-05-17 00:58:37.316289 | orchestrator | Saturday 17 May 2025 00:57:51 +0000 (0:00:01.191) 0:12:03.958 **********
2025-05-17 00:58:37.316293 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.316297 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.316301 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.316305 | orchestrator |
2025-05-17 00:58:37.316308 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ******************************
2025-05-17 00:58:37.316312 | orchestrator | Saturday 17 May 2025 00:57:52 +0000 (0:00:00.557) 0:12:04.515 **********
2025-05-17 00:58:37.316316 | orchestrator | skipping: [testbed-node-3]
2025-05-17 00:58:37.316319 | orchestrator | skipping: [testbed-node-4]
2025-05-17 00:58:37.316323 | orchestrator | skipping: [testbed-node-5]
2025-05-17 00:58:37.316327 | orchestrator |
2025-05-17 00:58:37.316331 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] **************************************
2025-05-17 00:58:37.316334 | orchestrator | Saturday 17 May 2025 00:57:52 +0000 (0:00:00.312) 0:12:04.828 **********
2025-05-17 00:58:37.316338 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-05-17 00:58:37.316342 | orchestrator |
2025-05-17 00:58:37.316345 | orchestrator | TASK [ceph-rgw : create ec profile] ********************************************
2025-05-17 00:58:37.316349 | orchestrator | Saturday 17 May 2025 00:57:52 +0000 (0:00:00.224) 0:12:05.052 **********
2025-05-17 00:58:37.316353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-17 00:58:37.316357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-17 00:58:37.316361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-17 00:58:37.316368 | orchestrator | skipping:
[testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316376 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.316380 | orchestrator | 2025-05-17 00:58:37.316383 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-17 00:58:37.316387 | orchestrator | Saturday 17 May 2025 00:57:53 +0000 (0:00:00.905) 0:12:05.958 ********** 2025-05-17 00:58:37.316391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316410 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.316414 | orchestrator | 2025-05-17 00:58:37.316417 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-17 00:58:37.316421 | orchestrator | Saturday 17 May 2025 00:57:54 +0000 (0:00:00.939) 0:12:06.897 ********** 2025-05-17 00:58:37.316425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2025-05-17 00:58:37.316429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-17 00:58:37.316444 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.316447 | orchestrator | 2025-05-17 00:58:37.316451 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-17 00:58:37.316474 | orchestrator | Saturday 17 May 2025 00:57:55 +0000 (0:00:00.616) 0:12:07.514 ********** 2025-05-17 00:58:37.316478 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-17 00:58:37.316486 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-17 00:58:37.316492 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-17 00:58:37.316502 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-17 00:58:37.316508 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-17 00:58:37.316513 | orchestrator | 2025-05-17 00:58:37.316520 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-17 00:58:37.316531 | orchestrator | Saturday 17 May 2025 00:58:19 +0000 (0:00:24.483) 0:12:31.998 ********** 2025-05-17 00:58:37.316535 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.316539 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.316543 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.316546 | orchestrator | 2025-05-17 00:58:37.316550 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-17 00:58:37.316554 | orchestrator | Saturday 17 May 2025 00:58:20 +0000 (0:00:00.463) 0:12:32.461 ********** 2025-05-17 00:58:37.316557 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.316561 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.316565 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.316569 | orchestrator | 2025-05-17 00:58:37.316572 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-17 00:58:37.316576 | orchestrator | Saturday 17 May 2025 00:58:20 +0000 (0:00:00.310) 0:12:32.771 ********** 2025-05-17 00:58:37.316580 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.316583 | orchestrator | 2025-05-17 00:58:37.316587 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-17 00:58:37.316591 | orchestrator | Saturday 17 May 2025 00:58:21 +0000 (0:00:00.559) 0:12:33.331 ********** 2025-05-17 00:58:37.316594 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.316598 | orchestrator | 
2025-05-17 00:58:37.316602 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-17 00:58:37.316606 | orchestrator | Saturday 17 May 2025 00:58:21 +0000 (0:00:00.761) 0:12:34.093 ********** 2025-05-17 00:58:37.316609 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.316613 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.316617 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.316620 | orchestrator | 2025-05-17 00:58:37.316624 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-17 00:58:37.316628 | orchestrator | Saturday 17 May 2025 00:58:23 +0000 (0:00:01.216) 0:12:35.309 ********** 2025-05-17 00:58:37.316631 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.316635 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.316639 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.316642 | orchestrator | 2025-05-17 00:58:37.316646 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-05-17 00:58:37.316650 | orchestrator | Saturday 17 May 2025 00:58:24 +0000 (0:00:01.211) 0:12:36.521 ********** 2025-05-17 00:58:37.316653 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.316657 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.316661 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.316664 | orchestrator | 2025-05-17 00:58:37.316668 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-17 00:58:37.316672 | orchestrator | Saturday 17 May 2025 00:58:26 +0000 (0:00:02.018) 0:12:38.540 ********** 2025-05-17 00:58:37.316675 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-17 00:58:37.316679 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-17 00:58:37.316683 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-17 00:58:37.316687 | orchestrator | 2025-05-17 00:58:37.316690 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-05-17 00:58:37.316694 | orchestrator | Saturday 17 May 2025 00:58:28 +0000 (0:00:01.969) 0:12:40.509 ********** 2025-05-17 00:58:37.316698 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.316701 | orchestrator | skipping: [testbed-node-4] 2025-05-17 00:58:37.316705 | orchestrator | skipping: [testbed-node-5] 2025-05-17 00:58:37.316712 | orchestrator | 2025-05-17 00:58:37.316716 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-17 00:58:37.316719 | orchestrator | Saturday 17 May 2025 00:58:29 +0000 (0:00:01.177) 0:12:41.686 ********** 2025-05-17 00:58:37.316723 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.316727 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.316731 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.316734 | orchestrator | 2025-05-17 00:58:37.316738 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-17 00:58:37.316745 | orchestrator | Saturday 17 May 2025 00:58:30 +0000 (0:00:00.680) 0:12:42.367 ********** 2025-05-17 00:58:37.316749 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 00:58:37.316752 | orchestrator | 2025-05-17 00:58:37.316756 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-17 00:58:37.316760 | orchestrator | Saturday 17 May 2025 00:58:30 +0000 (0:00:00.879) 0:12:43.246 ********** 2025-05-17 00:58:37.316763 
| orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.316767 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.316771 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.316775 | orchestrator | 2025-05-17 00:58:37.316778 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-17 00:58:37.316782 | orchestrator | Saturday 17 May 2025 00:58:31 +0000 (0:00:00.369) 0:12:43.616 ********** 2025-05-17 00:58:37.316786 | orchestrator | changed: [testbed-node-3] 2025-05-17 00:58:37.316789 | orchestrator | changed: [testbed-node-4] 2025-05-17 00:58:37.316795 | orchestrator | changed: [testbed-node-5] 2025-05-17 00:58:37.316799 | orchestrator | 2025-05-17 00:58:37.316803 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-17 00:58:37.316806 | orchestrator | Saturday 17 May 2025 00:58:32 +0000 (0:00:01.482) 0:12:45.098 ********** 2025-05-17 00:58:37.316810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 00:58:37.316814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 00:58:37.316817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 00:58:37.316821 | orchestrator | skipping: [testbed-node-3] 2025-05-17 00:58:37.316825 | orchestrator | 2025-05-17 00:58:37.316829 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-17 00:58:37.316832 | orchestrator | Saturday 17 May 2025 00:58:33 +0000 (0:00:00.640) 0:12:45.739 ********** 2025-05-17 00:58:37.316836 | orchestrator | ok: [testbed-node-3] 2025-05-17 00:58:37.316840 | orchestrator | ok: [testbed-node-4] 2025-05-17 00:58:37.316843 | orchestrator | ok: [testbed-node-5] 2025-05-17 00:58:37.316847 | orchestrator | 2025-05-17 00:58:37.316851 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-17 00:58:37.316855 | 
orchestrator | Saturday 17 May 2025 00:58:33 +0000 (0:00:00.345) 0:12:46.085 **********
2025-05-17 00:58:37.316858 | orchestrator | changed: [testbed-node-3]
2025-05-17 00:58:37.316862 | orchestrator | changed: [testbed-node-4]
2025-05-17 00:58:37.316866 | orchestrator | changed: [testbed-node-5]
2025-05-17 00:58:37.316869 | orchestrator |
2025-05-17 00:58:37.316873 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 00:58:37.316877 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
2025-05-17 00:58:37.316882 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-05-17 00:58:37.316885 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
2025-05-17 00:58:37.316889 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
2025-05-17 00:58:37.316904 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
2025-05-17 00:58:37.316909 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0
2025-05-17 00:58:37.316912 | orchestrator |
2025-05-17 00:58:37.316916 | orchestrator |
2025-05-17 00:58:37.317027 | orchestrator |
2025-05-17 00:58:37.317046 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 00:58:37.317050 | orchestrator | Saturday 17 May 2025 00:58:35 +0000 (0:00:01.299) 0:12:47.384 **********
2025-05-17 00:58:37.317054 | orchestrator | ===============================================================================
2025-05-17 00:58:37.317057 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 48.81s
2025-05-17 00:58:37.317061 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 40.15s
2025-05-17 00:58:37.317065 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 24.48s
2025-05-17 00:58:37.317069 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.48s
2025-05-17 00:58:37.317073 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.05s
2025-05-17 00:58:37.317076 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.27s
2025-05-17 00:58:37.317080 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.44s
2025-05-17 00:58:37.317084 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.32s
2025-05-17 00:58:37.317088 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.21s
2025-05-17 00:58:37.317091 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.45s
2025-05-17 00:58:37.317095 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.21s
2025-05-17 00:58:37.317099 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.15s
2025-05-17 00:58:37.317102 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 6.08s
2025-05-17 00:58:37.317106 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.01s
2025-05-17 00:58:37.317117 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.73s
2025-05-17 00:58:37.317121 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 3.91s
2025-05-17 00:58:37.317124 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.86s
2025-05-17 00:58:37.317128 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.50s
2025-05-17 00:58:37.317132 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.35s
2025-05-17 00:58:37.317135 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 3.05s
2025-05-17 00:58:37.317139 | orchestrator | 2025-05-17 00:58:37 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED
2025-05-17 00:58:37.317143 | orchestrator | 2025-05-17 00:58:37 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:40.349796 | orchestrator | 2025-05-17 00:58:40 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED
2025-05-17 00:58:40.352487 | orchestrator | 2025-05-17 00:58:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:40.354849 | orchestrator | 2025-05-17 00:58:40 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED
2025-05-17 00:58:40.355808 | orchestrator | 2025-05-17 00:58:40 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED
2025-05-17 00:58:40.355836 | orchestrator | 2025-05-17 00:58:40 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:43.398210 | orchestrator | 2025-05-17 00:58:43 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED
2025-05-17 00:58:43.398643 | orchestrator | 2025-05-17 00:58:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 00:58:43.398983 | orchestrator | 2025-05-17 00:58:43 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED
2025-05-17 00:58:43.402185 | orchestrator | 2025-05-17 00:58:43 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED
2025-05-17 00:58:43.402229 | orchestrator | 2025-05-17 00:58:43 | INFO  | Wait 1 second(s) until the next check
2025-05-17 00:58:46.447170 | orchestrator | 2025-05-17 00:58:46 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED
2025-05-17 00:58:46.447291 | orchestrator | 2025-05-17 00:58:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:58:46.447955 | orchestrator | 2025-05-17 00:58:46 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:58:46.448672 | orchestrator | 2025-05-17 00:58:46 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:58:46.450659 | orchestrator | 2025-05-17 00:58:46 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:58:49.486147 | orchestrator | 2025-05-17 00:58:49 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:58:49.486496 | orchestrator | 2025-05-17 00:58:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:58:49.487453 | orchestrator | 2025-05-17 00:58:49 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:58:49.487953 | orchestrator | 2025-05-17 00:58:49 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:58:49.487987 | orchestrator | 2025-05-17 00:58:49 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:58:52.528874 | orchestrator | 2025-05-17 00:58:52 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:58:52.529051 | orchestrator | 2025-05-17 00:58:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:58:52.529193 | orchestrator | 2025-05-17 00:58:52 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:58:52.530195 | orchestrator | 2025-05-17 00:58:52 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:58:52.530225 | orchestrator | 2025-05-17 00:58:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:58:55.567727 | orchestrator | 2025-05-17 00:58:55 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:58:55.567832 | 
orchestrator | 2025-05-17 00:58:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:58:55.569298 | orchestrator | 2025-05-17 00:58:55 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:58:55.569890 | orchestrator | 2025-05-17 00:58:55 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:58:55.569906 | orchestrator | 2025-05-17 00:58:55 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:58:58.610168 | orchestrator | 2025-05-17 00:58:58 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:58:58.611677 | orchestrator | 2025-05-17 00:58:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:58:58.613208 | orchestrator | 2025-05-17 00:58:58 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:58:58.614779 | orchestrator | 2025-05-17 00:58:58 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:58:58.614860 | orchestrator | 2025-05-17 00:58:58 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:59:01.658901 | orchestrator | 2025-05-17 00:59:01 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:59:01.659034 | orchestrator | 2025-05-17 00:59:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:59:01.659759 | orchestrator | 2025-05-17 00:59:01 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:59:01.661482 | orchestrator | 2025-05-17 00:59:01 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:59:01.661503 | orchestrator | 2025-05-17 00:59:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:59:04.713082 | orchestrator | 2025-05-17 00:59:04 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:59:04.713422 | orchestrator | 2025-05-17 
00:59:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:59:04.714643 | orchestrator | 2025-05-17 00:59:04 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:59:04.715156 | orchestrator | 2025-05-17 00:59:04 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:59:04.715169 | orchestrator | 2025-05-17 00:59:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:59:07.760652 | orchestrator | 2025-05-17 00:59:07 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:59:07.763584 | orchestrator | 2025-05-17 00:59:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:59:07.764053 | orchestrator | 2025-05-17 00:59:07 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:59:07.764904 | orchestrator | 2025-05-17 00:59:07 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:59:07.767702 | orchestrator | 2025-05-17 00:59:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:59:10.804588 | orchestrator | 2025-05-17 00:59:10 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:59:10.805097 | orchestrator | 2025-05-17 00:59:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:59:10.805881 | orchestrator | 2025-05-17 00:59:10 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:59:10.807661 | orchestrator | 2025-05-17 00:59:10 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:59:10.807711 | orchestrator | 2025-05-17 00:59:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:59:13.863826 | orchestrator | 2025-05-17 00:59:13 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:59:13.864017 | orchestrator | 2025-05-17 00:59:13 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:59:13.864774 | orchestrator | 2025-05-17 00:59:13 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:59:13.865225 | orchestrator | 2025-05-17 00:59:13 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:59:13.865268 | orchestrator | 2025-05-17 00:59:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:59:16.893960 | orchestrator | 2025-05-17 00:59:16 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:59:16.897704 | orchestrator | 2025-05-17 00:59:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:59:16.897769 | orchestrator | 2025-05-17 00:59:16 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:59:16.897815 | orchestrator | 2025-05-17 00:59:16 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:59:16.897828 | orchestrator | 2025-05-17 00:59:16 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:59:19.937361 | orchestrator | 2025-05-17 00:59:19 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:59:19.938629 | orchestrator | 2025-05-17 00:59:19 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:59:19.939173 | orchestrator | 2025-05-17 00:59:19 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:59:19.940726 | orchestrator | 2025-05-17 00:59:19 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:59:19.940814 | orchestrator | 2025-05-17 00:59:19 | INFO  | Wait 1 second(s) until the next check 2025-05-17 00:59:22.992266 | orchestrator | 2025-05-17 00:59:22 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state STARTED 2025-05-17 00:59:22.993244 | orchestrator | 2025-05-17 00:59:22 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 00:59:22.994639 | orchestrator | 2025-05-17 00:59:22 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 00:59:22.995581 | orchestrator | 2025-05-17 00:59:22 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 00:59:22.995605 | orchestrator | 2025-05-17 00:59:22 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:21.050259 | orchestrator | 2025-05-17 01:00:21 | INFO  | Task fdbd2121-44d5-40e4-b2bb-5454b98e4adf is in state SUCCESS 2025-05-17 01:00:21.051299 | orchestrator | 2025-05-17 01:00:21.051409 | orchestrator | 2025-05-17 01:00:21.051439 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:00:21.051461 | orchestrator | 2025-05-17 01:00:21.051480 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:00:21.051502 | orchestrator | Saturday 17 May 2025 00:58:36 +0000 (0:00:00.301) 0:00:00.301 ********** 2025-05-17 01:00:21.051523 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:00:21.051545 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:00:21.051565 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:00:21.051587 | orchestrator | 2025-05-17 01:00:21.051606 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 01:00:21.051626 | orchestrator | Saturday 17 May 2025 00:58:37 +0000 (0:00:00.386) 0:00:00.688 ********** 2025-05-17 01:00:21.051646 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-17 01:00:21.051665 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-17 01:00:21.051684 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-17 01:00:21.051703 | orchestrator | 2025-05-17 01:00:21.051723 | orchestrator | PLAY [Apply role horizon] 
****************************************************** 2025-05-17 01:00:21.051744 | orchestrator | 2025-05-17 01:00:21.051766 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-17 01:00:21.051787 | orchestrator | Saturday 17 May 2025 00:58:37 +0000 (0:00:00.292) 0:00:00.980 ********** 2025-05-17 01:00:21.051844 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:00:21.051866 | orchestrator | 2025-05-17 01:00:21.051928 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-17 01:00:21.051949 | orchestrator | Saturday 17 May 2025 00:58:38 +0000 (0:00:00.770) 0:00:01.751 ********** 2025-05-17 01:00:21.051997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 01:00:21.052061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 01:00:21.052098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 01:00:21.052122 | orchestrator | 2025-05-17 01:00:21.052142 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-17 01:00:21.052160 | orchestrator | Saturday 17 May 2025 00:58:40 +0000 (0:00:01.829) 0:00:03.581 ********** 2025-05-17 01:00:21.052178 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:00:21.052225 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:00:21.052243 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:00:21.052260 | orchestrator | 2025-05-17 
01:00:21.052278 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-17 01:00:21.052296 | orchestrator | Saturday 17 May 2025 00:58:40 +0000 (0:00:00.288) 0:00:03.869 ********** 2025-05-17 01:00:21.052324 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-17 01:00:21.052343 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-17 01:00:21.052361 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-17 01:00:21.052379 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-17 01:00:21.052406 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-17 01:00:21.052424 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-17 01:00:21.052442 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-17 01:00:21.052460 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-17 01:00:21.052477 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-17 01:00:21.052495 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-17 01:00:21.052512 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-17 01:00:21.052530 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-17 01:00:21.052548 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-17 01:00:21.052566 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-17 01:00:21.052584 | orchestrator | skipping: [testbed-node-2] => (item={'name': 
'cloudkitty', 'enabled': False})  2025-05-17 01:00:21.052609 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-17 01:00:21.052627 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-17 01:00:21.052645 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-17 01:00:21.052663 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-17 01:00:21.052681 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-17 01:00:21.052698 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-17 01:00:21.052717 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-17 01:00:21.052737 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-17 01:00:21.052756 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-17 01:00:21.052775 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-17 01:00:21.052795 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-17 01:00:21.052817 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-17 01:00:21.052836 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-17 01:00:21.052856 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-17 01:00:21.052876 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-17 01:00:21.052924 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-17 01:00:21.052942 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-17 01:00:21.052975 | orchestrator | 2025-05-17 01:00:21.052996 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-17 01:00:21.053015 | orchestrator | Saturday 17 May 2025 00:58:41 +0000 (0:00:00.928) 0:00:04.798 ********** 2025-05-17 01:00:21.053034 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:00:21.053054 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:00:21.053072 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:00:21.053090 | orchestrator | 2025-05-17 01:00:21.053110 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-17 01:00:21.053128 | orchestrator | Saturday 17 May 2025 00:58:41 +0000 (0:00:00.457) 0:00:05.255 ********** 2025-05-17 01:00:21.053148 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.053168 | orchestrator | 2025-05-17 01:00:21.053201 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-17 01:00:21.053221 | orchestrator | Saturday 17 May 2025 
00:58:42 +0000 (0:00:00.143) 0:00:05.398 ********** 2025-05-17 01:00:21.053240 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.053262 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:00:21.053282 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:00:21.053302 | orchestrator | 2025-05-17 01:00:21.053321 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-17 01:00:21.053340 | orchestrator | Saturday 17 May 2025 00:58:42 +0000 (0:00:00.415) 0:00:05.814 ********** 2025-05-17 01:00:21.053360 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:00:21.053379 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:00:21.053398 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:00:21.053416 | orchestrator | 2025-05-17 01:00:21.053436 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-17 01:00:21.053455 | orchestrator | Saturday 17 May 2025 00:58:42 +0000 (0:00:00.298) 0:00:06.112 ********** 2025-05-17 01:00:21.053474 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.053494 | orchestrator | 2025-05-17 01:00:21.053514 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-17 01:00:21.053535 | orchestrator | Saturday 17 May 2025 00:58:42 +0000 (0:00:00.116) 0:00:06.228 ********** 2025-05-17 01:00:21.053554 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.053642 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:00:21.053689 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:00:21.053711 | orchestrator | 2025-05-17 01:00:21.053731 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-17 01:00:21.053752 | orchestrator | Saturday 17 May 2025 00:58:43 +0000 (0:00:00.524) 0:00:06.753 ********** 2025-05-17 01:00:21.053770 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:00:21.053790 | orchestrator | 
ok: [testbed-node-1] 2025-05-17 01:00:21.053810 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:00:21.053830 | orchestrator | 2025-05-17 01:00:21.053850 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-17 01:00:21.053949 | orchestrator | Saturday 17 May 2025 00:58:43 +0000 (0:00:00.493) 0:00:07.246 ********** 2025-05-17 01:00:21.053976 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.053996 | orchestrator | 2025-05-17 01:00:21.054073 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-17 01:00:21.054103 | orchestrator | Saturday 17 May 2025 00:58:44 +0000 (0:00:00.130) 0:00:07.376 ********** 2025-05-17 01:00:21.054124 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.054144 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:00:21.054165 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:00:21.054185 | orchestrator | 2025-05-17 01:00:21.054205 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-17 01:00:21.054225 | orchestrator | Saturday 17 May 2025 00:58:44 +0000 (0:00:00.419) 0:00:07.796 ********** 2025-05-17 01:00:21.054246 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:00:21.054266 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:00:21.054286 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:00:21.054305 | orchestrator | 2025-05-17 01:00:21.054342 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-17 01:00:21.054363 | orchestrator | Saturday 17 May 2025 00:58:44 +0000 (0:00:00.445) 0:00:08.242 ********** 2025-05-17 01:00:21.054382 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.054404 | orchestrator | 2025-05-17 01:00:21.054425 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-17 01:00:21.054443 | orchestrator | 
Saturday 17 May 2025 00:58:45 +0000 (0:00:00.120) 0:00:08.362 ********** 2025-05-17 01:00:21.054463 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.054482 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:00:21.054500 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:00:21.054519 | orchestrator | 2025-05-17 01:00:21.054537 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-17 01:00:21.054555 | orchestrator | Saturday 17 May 2025 00:58:45 +0000 (0:00:00.452) 0:00:08.814 ********** 2025-05-17 01:00:21.054573 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:00:21.054591 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:00:21.054608 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:00:21.054626 | orchestrator | 2025-05-17 01:00:21.054644 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-17 01:00:21.054661 | orchestrator | Saturday 17 May 2025 00:58:45 +0000 (0:00:00.305) 0:00:09.119 ********** 2025-05-17 01:00:21.054680 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.054699 | orchestrator | 2025-05-17 01:00:21.054717 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-17 01:00:21.054736 | orchestrator | Saturday 17 May 2025 00:58:46 +0000 (0:00:00.253) 0:00:09.373 ********** 2025-05-17 01:00:21.054753 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.054771 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:00:21.054790 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:00:21.054808 | orchestrator | 2025-05-17 01:00:21.054827 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-17 01:00:21.054845 | orchestrator | Saturday 17 May 2025 00:58:46 +0000 (0:00:00.288) 0:00:09.661 ********** 2025-05-17 01:00:21.054863 | orchestrator | ok: [testbed-node-0] 2025-05-17 
01:00:21.054907 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:00:21.054927 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:00:21.054944 | orchestrator |
2025-05-17 01:00:21.054963 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-17 01:00:21.054979 | orchestrator | Saturday 17 May 2025 00:58:46 +0000 (0:00:00.630) 0:00:10.292 **********
2025-05-17 01:00:21.054997 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.055014 | orchestrator |
2025-05-17 01:00:21.055032 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-17 01:00:21.055050 | orchestrator | Saturday 17 May 2025 00:58:47 +0000 (0:00:00.124) 0:00:10.416 **********
2025-05-17 01:00:21.055069 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.055086 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:00:21.055104 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:00:21.055122 | orchestrator |
2025-05-17 01:00:21.055140 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-17 01:00:21.055158 | orchestrator | Saturday 17 May 2025 00:58:47 +0000 (0:00:00.571) 0:00:10.988 **********
2025-05-17 01:00:21.055197 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:00:21.055215 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:00:21.055234 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:00:21.055250 | orchestrator |
2025-05-17 01:00:21.055267 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-17 01:00:21.055285 | orchestrator | Saturday 17 May 2025 00:58:48 +0000 (0:00:00.444) 0:00:11.432 **********
2025-05-17 01:00:21.055303 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.055404 | orchestrator |
2025-05-17 01:00:21.055429 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-17 01:00:21.055444 | orchestrator | Saturday 17 May 2025 00:58:48 +0000 (0:00:00.107) 0:00:11.540 **********
2025-05-17 01:00:21.055470 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.055484 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:00:21.055498 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:00:21.055512 | orchestrator |
2025-05-17 01:00:21.055528 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-17 01:00:21.055543 | orchestrator | Saturday 17 May 2025 00:58:48 +0000 (0:00:00.385) 0:00:11.926 **********
2025-05-17 01:00:21.055558 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:00:21.055573 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:00:21.055587 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:00:21.055601 | orchestrator |
2025-05-17 01:00:21.055616 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-17 01:00:21.055630 | orchestrator | Saturday 17 May 2025 00:58:48 +0000 (0:00:00.329) 0:00:12.255 **********
2025-05-17 01:00:21.055644 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.055658 | orchestrator |
2025-05-17 01:00:21.055673 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-17 01:00:21.055686 | orchestrator | Saturday 17 May 2025 00:58:49 +0000 (0:00:00.242) 0:00:12.498 **********
2025-05-17 01:00:21.055700 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.055715 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:00:21.055729 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:00:21.055745 | orchestrator |
2025-05-17 01:00:21.055770 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-17 01:00:21.055786 | orchestrator | Saturday 17 May 2025 00:58:49 +0000 (0:00:00.278) 0:00:12.776 **********
2025-05-17 01:00:21.055801 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:00:21.055815 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:00:21.055830 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:00:21.055846 | orchestrator |
2025-05-17 01:00:21.055861 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-17 01:00:21.055875 | orchestrator | Saturday 17 May 2025 00:58:49 +0000 (0:00:00.428) 0:00:13.205 **********
2025-05-17 01:00:21.055959 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.055975 | orchestrator |
2025-05-17 01:00:21.055990 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-17 01:00:21.056004 | orchestrator | Saturday 17 May 2025 00:58:49 +0000 (0:00:00.112) 0:00:13.317 **********
2025-05-17 01:00:21.056019 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.056033 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:00:21.056047 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:00:21.056062 | orchestrator |
2025-05-17 01:00:21.056077 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-17 01:00:21.056092 | orchestrator | Saturday 17 May 2025 00:58:50 +0000 (0:00:00.409) 0:00:13.727 **********
2025-05-17 01:00:21.056107 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:00:21.056122 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:00:21.056137 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:00:21.056152 | orchestrator |
2025-05-17 01:00:21.056167 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-17 01:00:21.056181 | orchestrator | Saturday 17 May 2025 00:58:50 +0000 (0:00:00.426) 0:00:14.154 **********
2025-05-17 01:00:21.056196 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.056210 | orchestrator |
2025-05-17 01:00:21.056225 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-17 01:00:21.056285 | orchestrator | Saturday 17 May 2025 00:58:50 +0000 (0:00:00.115) 0:00:14.269 **********
2025-05-17 01:00:21.056302 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.056348 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:00:21.056364 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:00:21.056379 | orchestrator |
2025-05-17 01:00:21.056395 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-17 01:00:21.056408 | orchestrator | Saturday 17 May 2025 00:58:51 +0000 (0:00:00.428) 0:00:14.698 **********
2025-05-17 01:00:21.056435 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:00:21.056450 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:00:21.056463 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:00:21.056477 | orchestrator |
2025-05-17 01:00:21.056491 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-17 01:00:21.056504 | orchestrator | Saturday 17 May 2025 00:58:51 +0000 (0:00:00.401) 0:00:15.099 **********
2025-05-17 01:00:21.056517 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.056531 | orchestrator |
2025-05-17 01:00:21.056544 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-17 01:00:21.056558 | orchestrator | Saturday 17 May 2025 00:58:51 +0000 (0:00:00.110) 0:00:15.209 **********
2025-05-17 01:00:21.056571 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.056585 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:00:21.056600 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:00:21.056614 | orchestrator |
2025-05-17 01:00:21.056629 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-05-17 01:00:21.056643 | orchestrator | Saturday 17 May 2025 00:58:52 +0000 (0:00:00.410) 0:00:15.619 **********
2025-05-17
01:00:21.056658 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:00:21.056674 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:00:21.056689 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:00:21.056704 | orchestrator |
2025-05-17 01:00:21.056717 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-05-17 01:00:21.056732 | orchestrator | Saturday 17 May 2025 00:58:55 +0000 (0:00:02.760) 0:00:18.380 **********
2025-05-17 01:00:21.056746 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-17 01:00:21.056780 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-17 01:00:21.056796 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-17 01:00:21.056809 | orchestrator |
2025-05-17 01:00:21.056824 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-05-17 01:00:21.056838 | orchestrator | Saturday 17 May 2025 00:58:58 +0000 (0:00:03.059) 0:00:21.440 **********
2025-05-17 01:00:21.056853 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-17 01:00:21.056868 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-17 01:00:21.056906 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-17 01:00:21.056922 | orchestrator |
2025-05-17 01:00:21.056935 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-05-17 01:00:21.056949 | orchestrator | Saturday 17 May 2025 00:59:00 +0000 (0:00:02.722) 0:00:24.162 **********
2025-05-17 01:00:21.056963 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-17 01:00:21.056977 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-17 01:00:21.056991 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-17 01:00:21.057005 | orchestrator |
2025-05-17 01:00:21.057019 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-05-17 01:00:21.057033 | orchestrator | Saturday 17 May 2025 00:59:03 +0000 (0:00:02.289) 0:00:26.452 **********
2025-05-17 01:00:21.057048 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.057063 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:00:21.057078 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:00:21.057092 | orchestrator |
2025-05-17 01:00:21.057105 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-05-17 01:00:21.057118 | orchestrator | Saturday 17 May 2025 00:59:03 +0000 (0:00:00.657) 0:00:27.109 **********
2025-05-17 01:00:21.057132 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:00:21.057158 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:00:21.057172 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:00:21.057184 | orchestrator |
2025-05-17 01:00:21.057198 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-17 01:00:21.057259 | orchestrator | Saturday 17 May 2025 00:59:04 +0000 (0:00:00.594) 0:00:27.704 **********
2025-05-17 01:00:21.057276 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 01:00:21.057288 | orchestrator |
2025-05-17 01:00:21.057301 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-05-17 01:00:21.057316 | orchestrator |
Saturday 17 May 2025 00:59:05 +0000 (0:00:00.766) 0:00:28.470 ********** 2025-05-17 01:00:21.057350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 01:00:21.057383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 01:00:21.057421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 01:00:21.057438 | orchestrator | 2025-05-17 01:00:21.057451 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-17 01:00:21.057465 | orchestrator | Saturday 17 May 2025 00:59:07 +0000 (0:00:02.515) 0:00:30.986 ********** 2025-05-17 01:00:21.057485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-17 01:00:21.057507 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.057825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-17 01:00:21.057849 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:00:21.057871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 
'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-17 01:00:21.057924 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:00:21.057940 | orchestrator | 2025-05-17 01:00:21.057954 | orchestrator | TASK [service-cert-copy : 
horizon | Copying over backend internal TLS key] ***** 2025-05-17 01:00:21.057968 | orchestrator | Saturday 17 May 2025 00:59:08 +0000 (0:00:01.166) 0:00:32.153 ********** 2025-05-17 01:00:21.057999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-17 01:00:21.058083 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:00:21.058102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-17 01:00:21.058118 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.058185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-17 01:00:21.058210 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:00:21.058223 | orchestrator | 2025-05-17 01:00:21.058238 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-17 01:00:21.058253 | orchestrator | Saturday 17 May 2025 00:59:10 +0000 (0:00:01.330) 0:00:33.483 ********** 2025-05-17 01:00:21.058271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 01:00:21.058289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-17 01:00:21.058316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-05-17 01:00:21.058329 | orchestrator | 2025-05-17 01:00:21.058341 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-17 01:00:21.058353 | orchestrator | Saturday 17 May 2025 00:59:16 +0000 (0:00:06.366) 0:00:39.850 ********** 2025-05-17 01:00:21.058371 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:00:21.058383 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:00:21.058395 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:00:21.058407 | orchestrator | 2025-05-17 01:00:21.058419 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-17 01:00:21.058431 | orchestrator | Saturday 17 May 2025 00:59:16 +0000 (0:00:00.328) 0:00:40.178 ********** 2025-05-17 01:00:21.058443 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:00:21.058456 | orchestrator | 2025-05-17 01:00:21.058468 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-17 01:00:21.058481 | orchestrator | Saturday 17 May 2025 00:59:17 +0000 (0:00:00.542) 0:00:40.721 ********** 2025-05-17 01:00:21.058494 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:00:21.058507 | orchestrator | 2025-05-17 01:00:21.058519 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-17 01:00:21.058532 | orchestrator | Saturday 17 May 2025 00:59:19 +0000 (0:00:02.326) 0:00:43.047 ********** 2025-05-17 01:00:21.058545 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:00:21.058559 | orchestrator | 2025-05-17 01:00:21.058573 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-17 01:00:21.058591 | orchestrator | Saturday 17 May 2025 00:59:21 +0000 (0:00:02.103) 0:00:45.151 ********** 2025-05-17 01:00:21.058603 | 
orchestrator | changed: [testbed-node-0] 2025-05-17 01:00:21.058615 | orchestrator | 2025-05-17 01:00:21.058627 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-17 01:00:21.058639 | orchestrator | Saturday 17 May 2025 00:59:35 +0000 (0:00:13.738) 0:00:58.890 ********** 2025-05-17 01:00:21.058652 | orchestrator | 2025-05-17 01:00:21.058664 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-17 01:00:21.058676 | orchestrator | Saturday 17 May 2025 00:59:35 +0000 (0:00:00.058) 0:00:58.949 ********** 2025-05-17 01:00:21.058688 | orchestrator | 2025-05-17 01:00:21.058701 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-17 01:00:21.058713 | orchestrator | Saturday 17 May 2025 00:59:35 +0000 (0:00:00.171) 0:00:59.120 ********** 2025-05-17 01:00:21.058726 | orchestrator | 2025-05-17 01:00:21.058738 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-17 01:00:21.058750 | orchestrator | Saturday 17 May 2025 00:59:35 +0000 (0:00:00.058) 0:00:59.179 ********** 2025-05-17 01:00:21.058762 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:00:21.058775 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:00:21.058788 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:00:21.058800 | orchestrator | 2025-05-17 01:00:21.058812 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 01:00:21.058825 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-17 01:00:21.058838 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-17 01:00:21.058850 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-17 01:00:21.058861 | 
orchestrator | 2025-05-17 01:00:21.058873 | orchestrator | 2025-05-17 01:00:21.058937 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:00:21.058951 | orchestrator | Saturday 17 May 2025 01:00:18 +0000 (0:00:42.289) 0:01:41.469 ********** 2025-05-17 01:00:21.058963 | orchestrator | =============================================================================== 2025-05-17 01:00:21.058976 | orchestrator | horizon : Restart horizon container ------------------------------------ 42.29s 2025-05-17 01:00:21.058988 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.74s 2025-05-17 01:00:21.059007 | orchestrator | horizon : Deploy horizon container -------------------------------------- 6.37s 2025-05-17 01:00:21.059020 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.06s 2025-05-17 01:00:21.059032 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.76s 2025-05-17 01:00:21.059043 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.72s 2025-05-17 01:00:21.059054 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.52s 2025-05-17 01:00:21.059065 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.33s 2025-05-17 01:00:21.059077 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.29s 2025-05-17 01:00:21.059089 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.10s 2025-05-17 01:00:21.059101 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.83s 2025-05-17 01:00:21.059113 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.33s 2025-05-17 01:00:21.059125 | orchestrator | service-cert-copy : horizon | Copying over 
backend internal TLS certificate --- 1.17s 2025-05-17 01:00:21.059144 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.93s 2025-05-17 01:00:21.059156 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2025-05-17 01:00:21.059168 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2025-05-17 01:00:21.059180 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.66s 2025-05-17 01:00:21.059192 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2025-05-17 01:00:21.059204 | orchestrator | horizon : Copying over custom themes ------------------------------------ 0.59s 2025-05-17 01:00:21.059215 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2025-05-17 01:00:21.059225 | orchestrator | 2025-05-17 01:00:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:21.059236 | orchestrator | 2025-05-17 01:00:21 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 01:00:21.059247 | orchestrator | 2025-05-17 01:00:21 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:21.059259 | orchestrator | 2025-05-17 01:00:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:24.106554 | orchestrator | 2025-05-17 01:00:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:24.112627 | orchestrator | 2025-05-17 01:00:24 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 01:00:24.113602 | orchestrator | 2025-05-17 01:00:24 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:24.114633 | orchestrator | 2025-05-17 01:00:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:27.273785 | orchestrator | 
2025-05-17 01:00:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:27.274578 | orchestrator | 2025-05-17 01:00:27 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 01:00:27.275912 | orchestrator | 2025-05-17 01:00:27 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:27.275950 | orchestrator | 2025-05-17 01:00:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:30.410795 | orchestrator | 2025-05-17 01:00:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:30.410953 | orchestrator | 2025-05-17 01:00:30 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 01:00:30.410970 | orchestrator | 2025-05-17 01:00:30 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:30.411012 | orchestrator | 2025-05-17 01:00:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:33.464249 | orchestrator | 2025-05-17 01:00:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:33.465827 | orchestrator | 2025-05-17 01:00:33 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 01:00:33.467760 | orchestrator | 2025-05-17 01:00:33 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:33.467850 | orchestrator | 2025-05-17 01:00:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:36.519190 | orchestrator | 2025-05-17 01:00:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:36.520330 | orchestrator | 2025-05-17 01:00:36 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 01:00:36.522338 | orchestrator | 2025-05-17 01:00:36 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:36.522473 | orchestrator | 2025-05-17 01:00:36 | INFO  | 
Wait 1 second(s) until the next check 2025-05-17 01:00:39.571625 | orchestrator | 2025-05-17 01:00:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:39.572497 | orchestrator | 2025-05-17 01:00:39 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 01:00:39.573596 | orchestrator | 2025-05-17 01:00:39 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:39.573633 | orchestrator | 2025-05-17 01:00:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:42.619711 | orchestrator | 2025-05-17 01:00:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:42.621152 | orchestrator | 2025-05-17 01:00:42 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 01:00:42.622392 | orchestrator | 2025-05-17 01:00:42 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:42.622424 | orchestrator | 2025-05-17 01:00:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:45.675156 | orchestrator | 2025-05-17 01:00:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:45.676981 | orchestrator | 2025-05-17 01:00:45 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state STARTED 2025-05-17 01:00:45.678709 | orchestrator | 2025-05-17 01:00:45 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:45.678750 | orchestrator | 2025-05-17 01:00:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:48.725990 | orchestrator | 2025-05-17 01:00:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:48.731980 | orchestrator | 2025-05-17 01:00:48.732042 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-17 01:00:48.732057 | orchestrator | 2025-05-17 01:00:48.732068 | orchestrator | PLAY [Create 
ceph pools] ******************************************************* 2025-05-17 01:00:48.732079 | orchestrator | 2025-05-17 01:00:48.732089 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-17 01:00:48.732099 | orchestrator | Saturday 17 May 2025 00:58:40 +0000 (0:00:01.262) 0:00:01.262 ********** 2025-05-17 01:00:48.732110 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 01:00:48.732121 | orchestrator | 2025-05-17 01:00:48.732131 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-17 01:00:48.732141 | orchestrator | Saturday 17 May 2025 00:58:40 +0000 (0:00:00.504) 0:00:01.767 ********** 2025-05-17 01:00:48.732184 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-17 01:00:48.732195 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-17 01:00:48.732204 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-17 01:00:48.732214 | orchestrator | 2025-05-17 01:00:48.732224 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-17 01:00:48.732234 | orchestrator | Saturday 17 May 2025 00:58:41 +0000 (0:00:00.821) 0:00:02.588 ********** 2025-05-17 01:00:48.732243 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 01:00:48.732253 | orchestrator | 2025-05-17 01:00:48.732263 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-17 01:00:48.732273 | orchestrator | Saturday 17 May 2025 00:58:42 +0000 (0:00:00.777) 0:00:03.366 ********** 2025-05-17 01:00:48.732283 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.732295 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.732305 | orchestrator | ok: 
[testbed-node-5] 2025-05-17 01:00:48.732315 | orchestrator | 2025-05-17 01:00:48.732325 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-17 01:00:48.732334 | orchestrator | Saturday 17 May 2025 00:58:42 +0000 (0:00:00.667) 0:00:04.033 ********** 2025-05-17 01:00:48.732344 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.732353 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.732363 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.732372 | orchestrator | 2025-05-17 01:00:48.732381 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-17 01:00:48.732391 | orchestrator | Saturday 17 May 2025 00:58:43 +0000 (0:00:00.319) 0:00:04.353 ********** 2025-05-17 01:00:48.732400 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.732410 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.732419 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.732429 | orchestrator | 2025-05-17 01:00:48.732439 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-17 01:00:48.732448 | orchestrator | Saturday 17 May 2025 00:58:44 +0000 (0:00:00.880) 0:00:05.233 ********** 2025-05-17 01:00:48.732457 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.732467 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.732476 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.732485 | orchestrator | 2025-05-17 01:00:48.732512 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-17 01:00:48.732523 | orchestrator | Saturday 17 May 2025 00:58:44 +0000 (0:00:00.301) 0:00:05.535 ********** 2025-05-17 01:00:48.732544 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.732555 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.732564 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.732574 | orchestrator | 2025-05-17 
01:00:48.732583 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-17 01:00:48.732594 | orchestrator | Saturday 17 May 2025 00:58:44 +0000 (0:00:00.292) 0:00:05.828 ********** 2025-05-17 01:00:48.732611 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.732628 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.732645 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.732661 | orchestrator | 2025-05-17 01:00:48.732694 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-17 01:00:48.732723 | orchestrator | Saturday 17 May 2025 00:58:45 +0000 (0:00:00.319) 0:00:06.148 ********** 2025-05-17 01:00:48.732741 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.732756 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.732766 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.732776 | orchestrator | 2025-05-17 01:00:48.732786 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-17 01:00:48.732795 | orchestrator | Saturday 17 May 2025 00:58:45 +0000 (0:00:00.525) 0:00:06.673 ********** 2025-05-17 01:00:48.732805 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.732815 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.732835 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.732845 | orchestrator | 2025-05-17 01:00:48.732854 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-17 01:00:48.732864 | orchestrator | Saturday 17 May 2025 00:58:45 +0000 (0:00:00.289) 0:00:06.963 ********** 2025-05-17 01:00:48.732904 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-17 01:00:48.732914 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 01:00:48.732924 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 01:00:48.732933 | orchestrator | 2025-05-17 01:00:48.732943 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-17 01:00:48.732953 | orchestrator | Saturday 17 May 2025 00:58:46 +0000 (0:00:00.660) 0:00:07.623 ********** 2025-05-17 01:00:48.732962 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.732972 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.732981 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.732991 | orchestrator | 2025-05-17 01:00:48.733001 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-17 01:00:48.733010 | orchestrator | Saturday 17 May 2025 00:58:47 +0000 (0:00:00.621) 0:00:08.245 ********** 2025-05-17 01:00:48.733033 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-17 01:00:48.733043 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 01:00:48.733053 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 01:00:48.733062 | orchestrator | 2025-05-17 01:00:48.733072 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-17 01:00:48.733082 | orchestrator | Saturday 17 May 2025 00:58:49 +0000 (0:00:02.365) 0:00:10.610 ********** 2025-05-17 01:00:48.733091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-17 01:00:48.733101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-17 01:00:48.733110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-17 01:00:48.733120 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733130 | orchestrator | 2025-05-17 01:00:48.733146 | orchestrator | TASK [ceph-facts : check if the ceph 
mon socket is in-use] ********************* 2025-05-17 01:00:48.733156 | orchestrator | Saturday 17 May 2025 00:58:49 +0000 (0:00:00.421) 0:00:11.031 ********** 2025-05-17 01:00:48.733167 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-17 01:00:48.733180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-17 01:00:48.733190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-17 01:00:48.733200 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733210 | orchestrator | 2025-05-17 01:00:48.733219 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-17 01:00:48.733229 | orchestrator | Saturday 17 May 2025 00:58:50 +0000 (0:00:00.635) 0:00:11.667 ********** 2025-05-17 01:00:48.733241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-17 01:00:48.733259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-17 01:00:48.733270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-17 01:00:48.733280 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733290 | orchestrator | 2025-05-17 01:00:48.733299 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-17 01:00:48.733309 | orchestrator | Saturday 17 May 2025 00:58:50 +0000 (0:00:00.154) 0:00:11.822 ********** 2025-05-17 01:00:48.733321 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '312390d96c3a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-17 00:58:48.086054', 'end': '2025-05-17 00:58:48.124641', 'delta': '0:00:00.038587', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['312390d96c3a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-17 01:00:48.733348 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '917e49e1246c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 
'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-17 00:58:48.667459', 'end': '2025-05-17 00:58:48.706756', 'delta': '0:00:00.039297', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['917e49e1246c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-17 01:00:48.733364 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '7d3fa5b9497f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-17 00:58:49.207373', 'end': '2025-05-17 00:58:49.247312', 'delta': '0:00:00.039939', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7d3fa5b9497f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-17 01:00:48.733374 | orchestrator | 2025-05-17 01:00:48.733384 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-17 01:00:48.733394 | orchestrator | Saturday 17 May 2025 00:58:50 +0000 (0:00:00.184) 0:00:12.006 ********** 2025-05-17 01:00:48.733403 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.733414 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.733423 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.733433 | orchestrator | 2025-05-17 01:00:48.733442 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] 
************* 2025-05-17 01:00:48.733458 | orchestrator | Saturday 17 May 2025 00:58:51 +0000 (0:00:00.425) 0:00:12.432 ********** 2025-05-17 01:00:48.733467 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-17 01:00:48.733477 | orchestrator | 2025-05-17 01:00:48.733486 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-17 01:00:48.733495 | orchestrator | Saturday 17 May 2025 00:58:52 +0000 (0:00:01.313) 0:00:13.746 ********** 2025-05-17 01:00:48.733520 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733530 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.733540 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.733549 | orchestrator | 2025-05-17 01:00:48.733559 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-17 01:00:48.733568 | orchestrator | Saturday 17 May 2025 00:58:53 +0000 (0:00:00.617) 0:00:14.363 ********** 2025-05-17 01:00:48.733578 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733587 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.733597 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.733606 | orchestrator | 2025-05-17 01:00:48.733616 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-17 01:00:48.733626 | orchestrator | Saturday 17 May 2025 00:58:53 +0000 (0:00:00.496) 0:00:14.859 ********** 2025-05-17 01:00:48.733635 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733645 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.733655 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.733665 | orchestrator | 2025-05-17 01:00:48.733674 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-17 01:00:48.733684 | orchestrator | Saturday 17 May 2025 00:58:54 +0000 (0:00:00.309) 
0:00:15.169 ********** 2025-05-17 01:00:48.733693 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.733703 | orchestrator | 2025-05-17 01:00:48.733713 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-17 01:00:48.733722 | orchestrator | Saturday 17 May 2025 00:58:54 +0000 (0:00:00.122) 0:00:15.292 ********** 2025-05-17 01:00:48.733731 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733741 | orchestrator | 2025-05-17 01:00:48.733750 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-17 01:00:48.733760 | orchestrator | Saturday 17 May 2025 00:58:54 +0000 (0:00:00.235) 0:00:15.528 ********** 2025-05-17 01:00:48.733770 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733779 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.733789 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.733798 | orchestrator | 2025-05-17 01:00:48.733808 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-17 01:00:48.733817 | orchestrator | Saturday 17 May 2025 00:58:54 +0000 (0:00:00.505) 0:00:16.033 ********** 2025-05-17 01:00:48.733827 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733836 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.733846 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.733855 | orchestrator | 2025-05-17 01:00:48.733929 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-17 01:00:48.733951 | orchestrator | Saturday 17 May 2025 00:58:55 +0000 (0:00:00.358) 0:00:16.392 ********** 2025-05-17 01:00:48.733968 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.733984 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.734000 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.734090 | orchestrator | 2025-05-17 
01:00:48.734117 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-17 01:00:48.734135 | orchestrator | Saturday 17 May 2025 00:58:55 +0000 (0:00:00.370) 0:00:16.762 ********** 2025-05-17 01:00:48.734153 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.734170 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.734201 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.734219 | orchestrator | 2025-05-17 01:00:48.734236 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-17 01:00:48.734259 | orchestrator | Saturday 17 May 2025 00:58:56 +0000 (0:00:00.369) 0:00:17.132 ********** 2025-05-17 01:00:48.734269 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.734279 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.734288 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.734298 | orchestrator | 2025-05-17 01:00:48.734308 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-17 01:00:48.734317 | orchestrator | Saturday 17 May 2025 00:58:56 +0000 (0:00:00.634) 0:00:17.766 ********** 2025-05-17 01:00:48.734326 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.734336 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.734346 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.734355 | orchestrator | 2025-05-17 01:00:48.734365 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-17 01:00:48.734380 | orchestrator | Saturday 17 May 2025 00:58:57 +0000 (0:00:00.344) 0:00:18.111 ********** 2025-05-17 01:00:48.734390 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.734400 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.734410 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.734419 | orchestrator | 2025-05-17 
01:00:48.734429 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-17 01:00:48.734438 | orchestrator | Saturday 17 May 2025 00:58:57 +0000 (0:00:00.333) 0:00:18.445 ********** 2025-05-17 01:00:48.734450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7dd92559--5dfb--56e9--86ff--64c31a268c5e-osd--block--7dd92559--5dfb--56e9--86ff--64c31a268c5e', 'dm-uuid-LVM-ICpBvTgj5dTnFvOdfSeM1M1wzBfOATCHauj4UssTQYCoB2gyofatqm3DvgKoeSc2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--25c991a6--e724--5c1a--b659--154410c60242-osd--block--25c991a6--e724--5c1a--b659--154410c60242', 'dm-uuid-LVM-m2NNGHf8cIhV8Uzscf4kYJllmq0D4CK4ATzekxxRpUCvB6RkiASYM12j2j322eRc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 
01:00:48.734502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93bb0954--6685--5c67--a7e0--a3574f092206-osd--block--93bb0954--6685--5c67--a7e0--a3574f092206', 'dm-uuid-LVM-l2FeIeeIowH2T17wpx4XON9TwpscsTNGQ0MS1sG7tqDgRBCXmVTgFxydy1EGckCJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 
1}})  2025-05-17 01:00:48.734562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e21dde7b--e402--5316--8511--fd8df0cc7e38-osd--block--e21dde7b--e402--5316--8511--fd8df0cc7e38', 'dm-uuid-LVM-lsII9KsMdOVEhAs5jK6Hm4bWNIOUR5HdDSBLzkrclZplpraImsUWrgTbKIyO0WGQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a83a275b--240b--53eb--892d--9c3e23ab252d-osd--block--a83a275b--240b--53eb--892d--9c3e23ab252d', 'dm-uuid-LVM-SL0sNJNvgH2gWOvAMJwT3AsxRffsOCqYxObuOld4qGl6gElqmBsvB6aFehhG1EIH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b4d5f2e3--0e32--57e8--8b55--58d04db15593-osd--block--b4d5f2e3--0e32--57e8--8b55--58d04db15593', 'dm-uuid-LVM-OGMCMSTb5TfUSHUlwndgAQn048uH9N59BO7PfFYYUbEkwMCqDgYXeKdwXHp0Xvdy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part1', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part14', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part15', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part16', 'scsi-SQEMU_QEMU_HARDDISK_8f19b7c7-8ad2-4322-8bec-185edfc09a4c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.734802 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7dd92559--5dfb--56e9--86ff--64c31a268c5e-osd--block--7dd92559--5dfb--56e9--86ff--64c31a268c5e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JohAZ7-Yatq-n8jn-9Jsy-X2fm-6zew-06IpCs', 'scsi-0QEMU_QEMU_HARDDISK_4c541808-fecb-473a-bfa6-e6107b1a17c0', 'scsi-SQEMU_QEMU_HARDDISK_4c541808-fecb-473a-bfa6-e6107b1a17c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.734844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--25c991a6--e724--5c1a--b659--154410c60242-osd--block--25c991a6--e724--5c1a--b659--154410c60242'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C2G7Nt-gTCt-aKFX-QQcl-CDKe-yNdg-Yx1MWs', 'scsi-0QEMU_QEMU_HARDDISK_0e5716a4-9f06-4595-a8e5-44869be2d3e3', 'scsi-SQEMU_QEMU_HARDDISK_0e5716a4-9f06-4595-a8e5-44869be2d3e3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.734913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6120ef73-2521-4d83-8ac9-34a2289f978b', 'scsi-SQEMU_QEMU_HARDDISK_6120ef73-2521-4d83-8ac9-34a2289f978b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.734950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.734971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2025-05-17 01:00:48.734996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee', 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part1', 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part14', 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part15', 2025-05-17 01:00:48 | INFO  | Task 8b823225-32f0-4b03-aae4-89de7eafad76 is in state SUCCESS 2025-05-17 01:00:48.735012 | orchestrator | 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part16', 'scsi-SQEMU_QEMU_HARDDISK_5429789d-885a-4f80-a71f-930b52b349ee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735024 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.735034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:00:48.735044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93bb0954--6685--5c67--a7e0--a3574f092206-osd--block--93bb0954--6685--5c67--a7e0--a3574f092206'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-p0RudH-CC8C-EJgG-w2Ga-LCgf-BcId-roxDFR', 'scsi-0QEMU_QEMU_HARDDISK_6fc6848d-5127-4f65-b412-e829995e25e7', 'scsi-SQEMU_QEMU_HARDDISK_6fc6848d-5127-4f65-b412-e829995e25e7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part1', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part14', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part15', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part16', 'scsi-SQEMU_QEMU_HARDDISK_0216f665-ca85-43be-85f8-4def2235ea50-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735086 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e21dde7b--e402--5316--8511--fd8df0cc7e38-osd--block--e21dde7b--e402--5316--8511--fd8df0cc7e38'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dLQXNw-Fjla-neAB-ksBp-dcxc-Nukq-DBHkj4', 'scsi-0QEMU_QEMU_HARDDISK_e3068b10-d912-449c-8868-8ffe0bc578f0', 'scsi-SQEMU_QEMU_HARDDISK_e3068b10-d912-449c-8868-8ffe0bc578f0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a83a275b--240b--53eb--892d--9c3e23ab252d-osd--block--a83a275b--240b--53eb--892d--9c3e23ab252d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nWZ2GN-50zj-EzOk-p08k-P19H-WgDK-kK662v', 'scsi-0QEMU_QEMU_HARDDISK_4ddb2821-e209-41e3-b031-9f23c5adf4cf', 'scsi-SQEMU_QEMU_HARDDISK_4ddb2821-e209-41e3-b031-9f23c5adf4cf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bec56d32-b1fb-48c0-a20f-a6daa2f9686d', 'scsi-SQEMU_QEMU_HARDDISK_bec56d32-b1fb-48c0-a20f-a6daa2f9686d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b4d5f2e3--0e32--57e8--8b55--58d04db15593-osd--block--b4d5f2e3--0e32--57e8--8b55--58d04db15593'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gH6WB7-T494-Hq6L-G3g0-c5yg-j2Ve-S1f2Hy', 'scsi-0QEMU_QEMU_HARDDISK_8746963d-35d6-4275-a53f-fa471798b09a', 'scsi-SQEMU_QEMU_HARDDISK_8746963d-35d6-4275-a53f-fa471798b09a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735143 | orchestrator | skipping: [testbed-node-4] 2025-05-17 
01:00:48.735160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c9243530-1d89-4c38-b4ef-a9d7ed453cca', 'scsi-SQEMU_QEMU_HARDDISK_c9243530-1d89-4c38-b4ef-a9d7ed453cca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:00:48.735185 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.735195 | orchestrator | 2025-05-17 01:00:48.735205 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-17 01:00:48.735215 | orchestrator | Saturday 17 May 2025 00:58:58 +0000 (0:00:00.659) 0:00:19.105 ********** 2025-05-17 01:00:48.735225 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-17 01:00:48.735235 | orchestrator | 2025-05-17 01:00:48.735244 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-17 01:00:48.735261 | orchestrator | Saturday 17 May 2025 00:58:59 +0000 (0:00:01.487) 0:00:20.592 ********** 2025-05-17 
01:00:48.735277 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.735293 | orchestrator | 2025-05-17 01:00:48.735309 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-17 01:00:48.735324 | orchestrator | Saturday 17 May 2025 00:58:59 +0000 (0:00:00.172) 0:00:20.765 ********** 2025-05-17 01:00:48.735411 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.735429 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.735443 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.735453 | orchestrator | 2025-05-17 01:00:48.735463 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-17 01:00:48.735484 | orchestrator | Saturday 17 May 2025 00:59:00 +0000 (0:00:00.374) 0:00:21.140 ********** 2025-05-17 01:00:48.735494 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.735503 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.735513 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.735522 | orchestrator | 2025-05-17 01:00:48.735532 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-17 01:00:48.735541 | orchestrator | Saturday 17 May 2025 00:59:00 +0000 (0:00:00.686) 0:00:21.827 ********** 2025-05-17 01:00:48.735551 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.735560 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.735585 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.735596 | orchestrator | 2025-05-17 01:00:48.735605 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-17 01:00:48.735615 | orchestrator | Saturday 17 May 2025 00:59:01 +0000 (0:00:00.287) 0:00:22.114 ********** 2025-05-17 01:00:48.735624 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.735634 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.735643 | orchestrator | ok: [testbed-node-5] 2025-05-17 
01:00:48.735653 | orchestrator | 2025-05-17 01:00:48.735662 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-17 01:00:48.735672 | orchestrator | Saturday 17 May 2025 00:59:01 +0000 (0:00:00.874) 0:00:22.988 ********** 2025-05-17 01:00:48.735681 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.735691 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.735701 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.735710 | orchestrator | 2025-05-17 01:00:48.735720 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-17 01:00:48.735729 | orchestrator | Saturday 17 May 2025 00:59:02 +0000 (0:00:00.333) 0:00:23.322 ********** 2025-05-17 01:00:48.735739 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.735748 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.735758 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.735767 | orchestrator | 2025-05-17 01:00:48.735777 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-17 01:00:48.735786 | orchestrator | Saturday 17 May 2025 00:59:02 +0000 (0:00:00.436) 0:00:23.758 ********** 2025-05-17 01:00:48.735796 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.735806 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.735815 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.735825 | orchestrator | 2025-05-17 01:00:48.735834 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-17 01:00:48.735844 | orchestrator | Saturday 17 May 2025 00:59:03 +0000 (0:00:00.324) 0:00:24.083 ********** 2025-05-17 01:00:48.735853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-17 01:00:48.735863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-17 01:00:48.735936 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-17 01:00:48.735947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-17 01:00:48.735957 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.735966 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-17 01:00:48.735976 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-17 01:00:48.735986 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-17 01:00:48.735995 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.736005 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-17 01:00:48.736015 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-17 01:00:48.736034 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.736044 | orchestrator | 2025-05-17 01:00:48.736054 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-17 01:00:48.736064 | orchestrator | Saturday 17 May 2025 00:59:04 +0000 (0:00:01.162) 0:00:25.245 ********** 2025-05-17 01:00:48.736094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-17 01:00:48.736104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-17 01:00:48.736114 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-17 01:00:48.736123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-17 01:00:48.736133 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.736142 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-17 01:00:48.736152 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-17 01:00:48.736161 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-17 01:00:48.736177 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  
2025-05-17 01:00:48.736186 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.736196 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-17 01:00:48.736206 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.736215 | orchestrator | 2025-05-17 01:00:48.736225 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-17 01:00:48.736234 | orchestrator | Saturday 17 May 2025 00:59:05 +0000 (0:00:00.827) 0:00:26.072 ********** 2025-05-17 01:00:48.736244 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-17 01:00:48.736254 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-17 01:00:48.736263 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-17 01:00:48.736272 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-17 01:00:48.736282 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-17 01:00:48.736291 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-17 01:00:48.736301 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-17 01:00:48.736310 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-17 01:00:48.736319 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-17 01:00:48.736329 | orchestrator | 2025-05-17 01:00:48.736338 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-17 01:00:48.736348 | orchestrator | Saturday 17 May 2025 00:59:06 +0000 (0:00:01.816) 0:00:27.888 ********** 2025-05-17 01:00:48.736358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-17 01:00:48.736367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-17 01:00:48.736376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-17 01:00:48.736386 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-0)  2025-05-17 01:00:48.736395 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-17 01:00:48.736404 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-17 01:00:48.736414 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.736424 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.736433 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-17 01:00:48.736443 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-17 01:00:48.736452 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-17 01:00:48.736459 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.736467 | orchestrator | 2025-05-17 01:00:48.736475 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-17 01:00:48.736483 | orchestrator | Saturday 17 May 2025 00:59:07 +0000 (0:00:00.663) 0:00:28.552 ********** 2025-05-17 01:00:48.736491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-17 01:00:48.736499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-17 01:00:48.736506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-17 01:00:48.736514 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-17 01:00:48.736522 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-17 01:00:48.736530 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-17 01:00:48.736542 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.736550 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.736558 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-17 01:00:48.736566 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-17 01:00:48.736574 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-2)  2025-05-17 01:00:48.736581 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.736589 | orchestrator | 2025-05-17 01:00:48.736597 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-17 01:00:48.736605 | orchestrator | Saturday 17 May 2025 00:59:07 +0000 (0:00:00.415) 0:00:28.967 ********** 2025-05-17 01:00:48.736613 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-17 01:00:48.736621 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-17 01:00:48.736629 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-17 01:00:48.736636 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-17 01:00:48.736644 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-17 01:00:48.736652 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-17 01:00:48.736660 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.736673 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.736693 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-17 01:00:48.736702 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-17 01:00:48.736709 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-17 01:00:48.736717 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.736736 | orchestrator | 2025-05-17 01:00:48.736744 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-17 01:00:48.736762 | 
orchestrator | Saturday 17 May 2025 00:59:08 +0000 (0:00:00.456) 0:00:29.424 ********** 2025-05-17 01:00:48.736771 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 01:00:48.736779 | orchestrator | 2025-05-17 01:00:48.736790 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-17 01:00:48.736811 | orchestrator | Saturday 17 May 2025 00:59:09 +0000 (0:00:00.744) 0:00:30.169 ********** 2025-05-17 01:00:48.736819 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.736827 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.736835 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.736843 | orchestrator | 2025-05-17 01:00:48.736851 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-17 01:00:48.736952 | orchestrator | Saturday 17 May 2025 00:59:09 +0000 (0:00:00.458) 0:00:30.628 ********** 2025-05-17 01:00:48.736960 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.736980 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.736989 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.736997 | orchestrator | 2025-05-17 01:00:48.737004 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-17 01:00:48.737023 | orchestrator | Saturday 17 May 2025 00:59:09 +0000 (0:00:00.350) 0:00:30.978 ********** 2025-05-17 01:00:48.737032 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.737039 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.737047 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.737067 | orchestrator | 2025-05-17 01:00:48.737086 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-17 01:00:48.737101 | 
orchestrator | Saturday 17 May 2025 00:59:10 +0000 (0:00:00.330) 0:00:31.308 ********** 2025-05-17 01:00:48.737109 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.737117 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.737137 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.737146 | orchestrator | 2025-05-17 01:00:48.737164 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-17 01:00:48.737172 | orchestrator | Saturday 17 May 2025 00:59:11 +0000 (0:00:00.778) 0:00:32.087 ********** 2025-05-17 01:00:48.737180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 01:00:48.737188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 01:00:48.737196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 01:00:48.737203 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.737241 | orchestrator | 2025-05-17 01:00:48.737249 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-17 01:00:48.737267 | orchestrator | Saturday 17 May 2025 00:59:11 +0000 (0:00:00.389) 0:00:32.477 ********** 2025-05-17 01:00:48.737276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 01:00:48.737293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 01:00:48.737301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 01:00:48.737309 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.737317 | orchestrator | 2025-05-17 01:00:48.737325 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-17 01:00:48.737345 | orchestrator | Saturday 17 May 2025 00:59:11 +0000 (0:00:00.473) 0:00:32.950 ********** 2025-05-17 01:00:48.737363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 01:00:48.737371 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 01:00:48.737379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 01:00:48.737387 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.737406 | orchestrator | 2025-05-17 01:00:48.737414 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 01:00:48.737432 | orchestrator | Saturday 17 May 2025 00:59:12 +0000 (0:00:00.452) 0:00:33.402 ********** 2025-05-17 01:00:48.737440 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:00:48.737449 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:00:48.737456 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:00:48.737476 | orchestrator | 2025-05-17 01:00:48.737497 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-17 01:00:48.737505 | orchestrator | Saturday 17 May 2025 00:59:12 +0000 (0:00:00.446) 0:00:33.849 ********** 2025-05-17 01:00:48.737513 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-17 01:00:48.737532 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-17 01:00:48.737551 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-17 01:00:48.737559 | orchestrator | 2025-05-17 01:00:48.737567 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-17 01:00:48.737586 | orchestrator | Saturday 17 May 2025 00:59:13 +0000 (0:00:00.818) 0:00:34.668 ********** 2025-05-17 01:00:48.737594 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.737612 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.737621 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.737628 | orchestrator | 2025-05-17 01:00:48.737636 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-17 01:00:48.737658 | orchestrator | Saturday 17 May 2025 00:59:14 +0000 (0:00:00.415) 
0:00:35.084 ********** 2025-05-17 01:00:48.737677 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.737685 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.737693 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.737700 | orchestrator | 2025-05-17 01:00:48.737713 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-17 01:00:48.737733 | orchestrator | Saturday 17 May 2025 00:59:14 +0000 (0:00:00.306) 0:00:35.390 ********** 2025-05-17 01:00:48.737838 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-17 01:00:48.737847 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.737906 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-17 01:00:48.737920 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.737933 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-17 01:00:48.737946 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.737956 | orchestrator | 2025-05-17 01:00:48.737964 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-17 01:00:48.737971 | orchestrator | Saturday 17 May 2025 00:59:14 +0000 (0:00:00.446) 0:00:35.837 ********** 2025-05-17 01:00:48.737983 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-17 01:00:48.737992 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.738000 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-17 01:00:48.738007 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.738045 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-17 01:00:48.738055 | orchestrator | skipping: 
[testbed-node-5] 2025-05-17 01:00:48.738063 | orchestrator | 2025-05-17 01:00:48.738071 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-17 01:00:48.738078 | orchestrator | Saturday 17 May 2025 00:59:15 +0000 (0:00:00.234) 0:00:36.071 ********** 2025-05-17 01:00:48.738086 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-17 01:00:48.738094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-17 01:00:48.738102 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-17 01:00:48.738110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-17 01:00:48.738117 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.738125 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-17 01:00:48.738133 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-17 01:00:48.738141 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-17 01:00:48.738149 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.738156 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-17 01:00:48.738164 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-17 01:00:48.738172 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.738180 | orchestrator | 2025-05-17 01:00:48.738188 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-17 01:00:48.738196 | orchestrator | Saturday 17 May 2025 00:59:15 +0000 (0:00:00.721) 0:00:36.793 ********** 2025-05-17 01:00:48.738204 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.738211 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.738219 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:00:48.738227 | orchestrator | 2025-05-17 01:00:48.738235 | orchestrator | TASK 
[ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-17 01:00:48.738242 | orchestrator | Saturday 17 May 2025 00:59:16 +0000 (0:00:00.273) 0:00:37.067 ********** 2025-05-17 01:00:48.738250 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-17 01:00:48.738258 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 01:00:48.738266 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 01:00:48.738273 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-17 01:00:48.738281 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-17 01:00:48.738289 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-17 01:00:48.738303 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-17 01:00:48.738311 | orchestrator | 2025-05-17 01:00:48.738319 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-17 01:00:48.738326 | orchestrator | Saturday 17 May 2025 00:59:16 +0000 (0:00:00.885) 0:00:37.952 ********** 2025-05-17 01:00:48.738334 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-17 01:00:48.738342 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 01:00:48.738350 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 01:00:48.738357 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-17 01:00:48.738365 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-17 01:00:48.738373 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => 
(item=testbed-node-5) 2025-05-17 01:00:48.738380 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-17 01:00:48.738388 | orchestrator | 2025-05-17 01:00:48.738396 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-17 01:00:48.738404 | orchestrator | Saturday 17 May 2025 00:59:18 +0000 (0:00:01.564) 0:00:39.516 ********** 2025-05-17 01:00:48.738412 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:00:48.738420 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:00:48.738428 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-17 01:00:48.738436 | orchestrator | 2025-05-17 01:00:48.738450 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-17 01:00:48.738458 | orchestrator | Saturday 17 May 2025 00:59:18 +0000 (0:00:00.471) 0:00:39.988 ********** 2025-05-17 01:00:48.738468 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-17 01:00:48.738482 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-17 01:00:48.738490 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-17 01:00:48.738498 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-17 01:00:48.738507 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-17 01:00:48.738514 | orchestrator | 2025-05-17 01:00:48.738522 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-17 01:00:48.738530 | orchestrator | Saturday 17 May 2025 00:59:58 +0000 (0:00:39.756) 0:01:19.744 ********** 2025-05-17 01:00:48.738538 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738545 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738561 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738568 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738576 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738592 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-17 01:00:48.738600 | orchestrator | 2025-05-17 01:00:48.738607 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-17 01:00:48.738615 | orchestrator | Saturday 17 May 2025 01:00:20 +0000 (0:00:21.732) 
0:01:41.477 ********** 2025-05-17 01:00:48.738623 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738630 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738638 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738646 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738692 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738706 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738720 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-17 01:00:48.738733 | orchestrator | 2025-05-17 01:00:48.738746 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-17 01:00:48.738760 | orchestrator | Saturday 17 May 2025 01:00:30 +0000 (0:00:10.043) 0:01:51.521 ********** 2025-05-17 01:00:48.738773 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738782 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-17 01:00:48.738802 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-17 01:00:48.738810 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738818 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-17 01:00:48.738826 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-17 01:00:48.738834 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-17 01:00:48.738841 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None)
2025-05-17 01:00:48.738849 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-17 01:00:48.738864 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-17 01:00:48.738923 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-17 01:00:48.738932 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-17 01:00:48.738940 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-17 01:00:48.738948 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-17 01:00:48.738956 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-17 01:00:48.738963 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-17 01:00:48.738971 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-17 01:00:48.738982 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-17 01:00:48.738988 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-05-17 01:00:48.738995 | orchestrator |
2025-05-17 01:00:48.739002 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:00:48.739008 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-17 01:00:48.739023 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-05-17 01:00:48.739030 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-05-17 01:00:48.739036 | orchestrator |
2025-05-17 01:00:48.739043 | orchestrator |
2025-05-17 01:00:48.739050 | orchestrator |
2025-05-17 01:00:48.739056 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 01:00:48.739063 | orchestrator | Saturday 17 May 2025 01:00:48 +0000 (0:00:17.684) 0:02:09.205 **********
2025-05-17 01:00:48.739069 | orchestrator | ===============================================================================
2025-05-17 01:00:48.739076 | orchestrator | create openstack pool(s) ----------------------------------------------- 39.76s
2025-05-17 01:00:48.739082 | orchestrator | generate keys ---------------------------------------------------------- 21.73s
2025-05-17 01:00:48.739089 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.68s
2025-05-17 01:00:48.739095 | orchestrator | get keys from monitors ------------------------------------------------- 10.04s
2025-05-17 01:00:48.739102 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.37s
2025-05-17 01:00:48.739108 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.82s
2025-05-17 01:00:48.739115 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.56s
2025-05-17 01:00:48.739121 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.49s
2025-05-17 01:00:48.739128 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.31s
2025-05-17 01:00:48.739134 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.16s
2025-05-17 01:00:48.739141 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.89s
2025-05-17 01:00:48.739147 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.88s
2025-05-17 01:00:48.739154 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.87s
2025-05-17 01:00:48.739160 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.83s
2025-05-17 01:00:48.739167 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.82s
2025-05-17 01:00:48.739174 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.82s
2025-05-17 01:00:48.739180 | orchestrator | ceph-facts : set_fact _radosgw_address to radosgw_address --------------- 0.78s
2025-05-17 01:00:48.739187 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.78s
2025-05-17 01:00:48.739193 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.74s
2025-05-17 01:00:48.739200 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.72s
2025-05-17 01:00:48.739206 | orchestrator | 2025-05-17 01:00:48 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED
2025-05-17 01:00:48.739213 | orchestrator | 2025-05-17 01:00:48 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:00:51.779135 | orchestrator | 2025-05-17 01:00:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:00:51.780766 | orchestrator | 2025-05-17 01:00:51 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED
2025-05-17 01:00:51.782117 | orchestrator | 2025-05-17 01:00:51 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED
2025-05-17 01:00:51.782144 | orchestrator | 2025-05-17 01:00:51 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:00:54.829352 | orchestrator | 2025-05-17 01:00:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:00:54.830118 | orchestrator | 2025-05-17 01:00:54 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED
2025-05-17 01:00:54.831412 | orchestrator | 2025-05-17 01:00:54 | INFO  | Task 
12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:54.831438 | orchestrator | 2025-05-17 01:00:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:00:57.879529 | orchestrator | 2025-05-17 01:00:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:00:57.880934 | orchestrator | 2025-05-17 01:00:57 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED 2025-05-17 01:00:57.884024 | orchestrator | 2025-05-17 01:00:57 | INFO  | Task a80bc4bc-27aa-44b4-b7de-53415729df85 is in state STARTED 2025-05-17 01:00:57.886150 | orchestrator | 2025-05-17 01:00:57 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:00:57.887420 | orchestrator | 2025-05-17 01:00:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:01:00.939750 | orchestrator | 2025-05-17 01:01:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:01:00.941976 | orchestrator | 2025-05-17 01:01:00 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED 2025-05-17 01:01:00.944052 | orchestrator | 2025-05-17 01:01:00 | INFO  | Task a80bc4bc-27aa-44b4-b7de-53415729df85 is in state STARTED 2025-05-17 01:01:00.945591 | orchestrator | 2025-05-17 01:01:00 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:01:00.945633 | orchestrator | 2025-05-17 01:01:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:01:03.989900 | orchestrator | 2025-05-17 01:01:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:01:03.991360 | orchestrator | 2025-05-17 01:01:03 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED 2025-05-17 01:01:03.992983 | orchestrator | 2025-05-17 01:01:03 | INFO  | Task a80bc4bc-27aa-44b4-b7de-53415729df85 is in state STARTED 2025-05-17 01:01:03.995026 | orchestrator | 2025-05-17 01:01:03 | INFO  | Task 
12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:01:03.995336 | orchestrator | 2025-05-17 01:01:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:01:07.045533 | orchestrator | 2025-05-17 01:01:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:01:07.046226 | orchestrator | 2025-05-17 01:01:07 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED 2025-05-17 01:01:07.048389 | orchestrator | 2025-05-17 01:01:07 | INFO  | Task a80bc4bc-27aa-44b4-b7de-53415729df85 is in state STARTED 2025-05-17 01:01:07.052034 | orchestrator | 2025-05-17 01:01:07 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:01:07.052082 | orchestrator | 2025-05-17 01:01:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:01:10.093105 | orchestrator | 2025-05-17 01:01:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:01:10.093337 | orchestrator | 2025-05-17 01:01:10 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED 2025-05-17 01:01:10.094384 | orchestrator | 2025-05-17 01:01:10 | INFO  | Task a80bc4bc-27aa-44b4-b7de-53415729df85 is in state STARTED 2025-05-17 01:01:10.095020 | orchestrator | 2025-05-17 01:01:10 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state STARTED 2025-05-17 01:01:10.095056 | orchestrator | 2025-05-17 01:01:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:01:13.134012 | orchestrator | 2025-05-17 01:01:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:01:13.134583 | orchestrator | 2025-05-17 01:01:13 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED 2025-05-17 01:01:13.135536 | orchestrator | 2025-05-17 01:01:13 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:01:13.136508 | orchestrator | 2025-05-17 01:01:13 | INFO  | Task 
a80bc4bc-27aa-44b4-b7de-53415729df85 is in state STARTED 2025-05-17 01:01:13.137315 | orchestrator | 2025-05-17 01:01:13 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:01:13.138323 | orchestrator | 2025-05-17 01:01:13 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED 2025-05-17 01:01:13.140268 | orchestrator | 2025-05-17 01:01:13 | INFO  | Task 12713ea6-e89c-47ae-8314-47c2ad9a6ca8 is in state SUCCESS 2025-05-17 01:01:13.142945 | orchestrator | 2025-05-17 01:01:13.142989 | orchestrator | 2025-05-17 01:01:13.143002 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:01:13.143014 | orchestrator | 2025-05-17 01:01:13.143025 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:01:13.143036 | orchestrator | Saturday 17 May 2025 00:58:36 +0000 (0:00:00.296) 0:00:00.296 ********** 2025-05-17 01:01:13.143048 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:13.143062 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:01:13.143074 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:01:13.143085 | orchestrator | 2025-05-17 01:01:13.143097 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 01:01:13.143108 | orchestrator | Saturday 17 May 2025 00:58:37 +0000 (0:00:00.387) 0:00:00.683 ********** 2025-05-17 01:01:13.143119 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-17 01:01:13.143130 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-17 01:01:13.143141 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-17 01:01:13.143152 | orchestrator | 2025-05-17 01:01:13.143163 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-17 01:01:13.143173 | orchestrator | 2025-05-17 01:01:13.143184 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2025-05-17 01:01:13.143204 | orchestrator | Saturday 17 May 2025 00:58:37 +0000 (0:00:00.300) 0:00:00.984 ********** 2025-05-17 01:01:13.143215 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:01:13.143227 | orchestrator | 2025-05-17 01:01:13.143238 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-17 01:01:13.143249 | orchestrator | Saturday 17 May 2025 00:58:38 +0000 (0:00:00.787) 0:00:01.772 ********** 2025-05-17 01:01:13.143265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.143283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.143381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.143399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143489 | orchestrator | 2025-05-17 01:01:13.143502 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2025-05-17 01:01:13.143520 | orchestrator | Saturday 17 May 2025 00:58:40 +0000 (0:00:02.463) 0:00:04.236 ********** 2025-05-17 01:01:13.143534 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-17 01:01:13.143547 | orchestrator | 2025-05-17 01:01:13.143561 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-17 01:01:13.143574 | orchestrator | Saturday 17 May 2025 00:58:41 +0000 (0:00:00.524) 0:00:04.760 ********** 2025-05-17 01:01:13.143586 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:13.143600 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:01:13.143612 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:01:13.143626 | orchestrator | 2025-05-17 01:01:13.143637 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-17 01:01:13.143648 | orchestrator | Saturday 17 May 2025 00:58:41 +0000 (0:00:00.443) 0:00:05.204 ********** 2025-05-17 01:01:13.143659 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-17 01:01:13.143671 | orchestrator | 2025-05-17 01:01:13.143682 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-17 01:01:13.143693 | orchestrator | Saturday 17 May 2025 00:58:42 +0000 (0:00:00.410) 0:00:05.614 ********** 2025-05-17 01:01:13.143708 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:01:13.143719 | orchestrator | 2025-05-17 01:01:13.143731 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-17 01:01:13.143742 | orchestrator | Saturday 17 May 2025 00:58:42 +0000 (0:00:00.650) 0:00:06.265 ********** 2025-05-17 01:01:13.143753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.143776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 
2025-05-17 01:01:13.143797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.143810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.143931 | orchestrator | 2025-05-17 01:01:13.143942 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-17 01:01:13.143953 | orchestrator | Saturday 17 May 2025 00:58:46 +0000 (0:00:03.228) 0:00:09.493 ********** 2025-05-17 01:01:13.143983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-17 01:01:13.143996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.144015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 01:01:13.144026 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.144038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-17 01:01:13.144050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.144071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 01:01:13.144083 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:01:13.144100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-17 01:01:13.144118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2025-05-17 01:01:13.144130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 01:01:13.144142 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.144153 | orchestrator | 2025-05-17 01:01:13.144164 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-17 01:01:13.144175 | orchestrator | Saturday 17 May 2025 00:58:47 +0000 (0:00:00.971) 0:00:10.465 ********** 2025-05-17 01:01:13.144186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-17 01:01:13.144205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.144222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 01:01:13.144242 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.144254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-17 01:01:13.144266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.144278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 01:01:13.144289 | orchestrator | skipping: [testbed-node-1] 2025-05-17 
01:01:13.144309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-17 01:01:13.144334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.144346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-17 01:01:13.144357 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.144368 | orchestrator | 2025-05-17 01:01:13.144379 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-17 01:01:13.144390 | orchestrator | Saturday 17 May 2025 00:58:48 +0000 (0:00:01.220) 0:00:11.686 ********** 2025-05-17 01:01:13.144402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.144414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.144438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.144459 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.144470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.144483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.144503 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.144523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.144557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.144601 | orchestrator | 2025-05-17 01:01:13.144621 | orchestrator | TASK 
[keystone : Copying over keystone.conf] *********************************** 2025-05-17 01:01:13.144639 | orchestrator | Saturday 17 May 2025 00:58:51 +0000 (0:00:03.208) 0:00:14.895 ********** 2025-05-17 01:01:13.144668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.144689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.144709 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.144730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.144781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.144803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.144824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.144839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.144850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.144921 | orchestrator | 2025-05-17 01:01:13.144933 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-17 01:01:13.144944 | orchestrator | Saturday 17 May 2025 00:58:58 +0000 (0:00:07.348) 0:00:22.243 ********** 2025-05-17 01:01:13.144955 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:01:13.144978 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:01:13.144989 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:01:13.145000 | orchestrator | 2025-05-17 01:01:13.145010 | orchestrator | TASK [keystone : 
Create Keystone domain-specific config directory] ************* 2025-05-17 01:01:13.145020 | orchestrator | Saturday 17 May 2025 00:59:01 +0000 (0:00:02.537) 0:00:24.781 ********** 2025-05-17 01:01:13.145029 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.145039 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:01:13.145049 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.145059 | orchestrator | 2025-05-17 01:01:13.145075 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-17 01:01:13.145085 | orchestrator | Saturday 17 May 2025 00:59:02 +0000 (0:00:01.111) 0:00:25.893 ********** 2025-05-17 01:01:13.145095 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.145105 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:01:13.145114 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.145124 | orchestrator | 2025-05-17 01:01:13.145134 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-17 01:01:13.145143 | orchestrator | Saturday 17 May 2025 00:59:03 +0000 (0:00:00.442) 0:00:26.335 ********** 2025-05-17 01:01:13.145153 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.145162 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:01:13.145172 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.145181 | orchestrator | 2025-05-17 01:01:13.145191 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-17 01:01:13.145200 | orchestrator | Saturday 17 May 2025 00:59:03 +0000 (0:00:00.422) 0:00:26.757 ********** 2025-05-17 01:01:13.145220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.145232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.145243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.145260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.145281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.145293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-17 01:01:13.145304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.145314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.145331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.145341 | orchestrator | 2025-05-17 01:01:13.145351 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-17 01:01:13.145360 | orchestrator | Saturday 17 May 2025 00:59:06 +0000 (0:00:02.876) 0:00:29.633 ********** 2025-05-17 01:01:13.145370 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.145380 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:01:13.145389 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.145399 | orchestrator | 2025-05-17 01:01:13.145408 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-17 01:01:13.145418 | orchestrator | Saturday 17 May 2025 00:59:06 +0000 (0:00:00.442) 0:00:30.076 ********** 2025-05-17 01:01:13.145428 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-17 01:01:13.145438 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-17 01:01:13.145453 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-17 01:01:13.145463 | orchestrator | 2025-05-17 01:01:13.145472 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-17 01:01:13.145482 | orchestrator | Saturday 17 May 2025 00:59:09 +0000 (0:00:02.628) 0:00:32.704 ********** 2025-05-17 01:01:13.145492 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-17 01:01:13.145501 | orchestrator | 2025-05-17 01:01:13.145511 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-17 01:01:13.145521 | orchestrator | Saturday 17 May 2025 00:59:10 +0000 (0:00:00.820) 0:00:33.525 ********** 2025-05-17 01:01:13.145530 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.145540 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:01:13.145549 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.145559 | orchestrator | 2025-05-17 01:01:13.145568 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-17 01:01:13.145578 | orchestrator | Saturday 17 May 2025 00:59:11 +0000 (0:00:01.710) 0:00:35.235 ********** 2025-05-17 01:01:13.145587 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-17 01:01:13.145597 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-17 01:01:13.145606 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-17 01:01:13.145616 | orchestrator | 2025-05-17 01:01:13.145630 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-17 01:01:13.145640 | orchestrator | Saturday 17 May 2025 00:59:13 +0000 (0:00:01.524) 0:00:36.760 ********** 2025-05-17 01:01:13.145650 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:13.145660 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:01:13.145670 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:01:13.145679 | orchestrator 
| 2025-05-17 01:01:13.145689 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-17 01:01:13.145699 | orchestrator | Saturday 17 May 2025 00:59:13 +0000 (0:00:00.258) 0:00:37.018 ********** 2025-05-17 01:01:13.145708 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-17 01:01:13.145718 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-17 01:01:13.145727 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-17 01:01:13.145743 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-17 01:01:13.145752 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-17 01:01:13.145762 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-17 01:01:13.145772 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-17 01:01:13.145782 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-17 01:01:13.145791 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-17 01:01:13.145801 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-17 01:01:13.145810 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-17 01:01:13.145820 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-17 01:01:13.145829 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 
'fernet-healthcheck.sh'}) 2025-05-17 01:01:13.145838 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-17 01:01:13.145848 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-17 01:01:13.145874 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-17 01:01:13.145884 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-17 01:01:13.145894 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-17 01:01:13.145903 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-17 01:01:13.145913 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-17 01:01:13.145922 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-17 01:01:13.145932 | orchestrator | 2025-05-17 01:01:13.145941 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-17 01:01:13.145951 | orchestrator | Saturday 17 May 2025 00:59:23 +0000 (0:00:10.119) 0:00:47.138 ********** 2025-05-17 01:01:13.145960 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-17 01:01:13.145969 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-17 01:01:13.145979 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-17 01:01:13.145988 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-17 01:01:13.145998 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-17 
01:01:13.146013 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-17 01:01:13.146058 | orchestrator | 2025-05-17 01:01:13.146068 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-17 01:01:13.146078 | orchestrator | Saturday 17 May 2025 00:59:26 +0000 (0:00:03.120) 0:00:50.259 ********** 2025-05-17 01:01:13.146093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.146111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.146123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-17 01:01:13.146134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.146153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.146168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-17 01:01:13.146185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.146195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.146205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-17 01:01:13.146215 | orchestrator | 2025-05-17 01:01:13.146225 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-17 01:01:13.146234 | orchestrator | Saturday 17 May 2025 00:59:29 +0000 (0:00:02.647) 0:00:52.906 ********** 2025-05-17 01:01:13.146244 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.146254 | orchestrator | skipping: 
[testbed-node-1] 2025-05-17 01:01:13.146264 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.146273 | orchestrator | 2025-05-17 01:01:13.146283 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-17 01:01:13.146293 | orchestrator | Saturday 17 May 2025 00:59:29 +0000 (0:00:00.273) 0:00:53.179 ********** 2025-05-17 01:01:13.146302 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:01:13.146311 | orchestrator | 2025-05-17 01:01:13.146321 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-17 01:01:13.146330 | orchestrator | Saturday 17 May 2025 00:59:32 +0000 (0:00:02.450) 0:00:55.629 ********** 2025-05-17 01:01:13.146340 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:01:13.146350 | orchestrator | 2025-05-17 01:01:13.146359 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-17 01:01:13.146368 | orchestrator | Saturday 17 May 2025 00:59:34 +0000 (0:00:02.201) 0:00:57.830 ********** 2025-05-17 01:01:13.146378 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:13.146388 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:01:13.146397 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:01:13.146407 | orchestrator | 2025-05-17 01:01:13.146417 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-17 01:01:13.146432 | orchestrator | Saturday 17 May 2025 00:59:35 +0000 (0:00:00.885) 0:00:58.716 ********** 2025-05-17 01:01:13.146441 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:13.146456 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:01:13.146466 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:01:13.146476 | orchestrator | 2025-05-17 01:01:13.146486 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-17 01:01:13.146496 | orchestrator | Saturday 17 May 
2025 00:59:35 +0000 (0:00:00.294) 0:00:59.011 ********** 2025-05-17 01:01:13.146505 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.146515 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:01:13.146525 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.146534 | orchestrator | 2025-05-17 01:01:13.146544 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-17 01:01:13.146553 | orchestrator | Saturday 17 May 2025 00:59:36 +0000 (0:00:00.425) 0:00:59.436 ********** 2025-05-17 01:01:13.146563 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:01:13.146572 | orchestrator | 2025-05-17 01:01:13.146582 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-17 01:01:13.146592 | orchestrator | Saturday 17 May 2025 00:59:49 +0000 (0:00:13.033) 0:01:12.470 ********** 2025-05-17 01:01:13.146601 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:01:13.146611 | orchestrator | 2025-05-17 01:01:13.146620 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-17 01:01:13.146634 | orchestrator | Saturday 17 May 2025 00:59:58 +0000 (0:00:08.966) 0:01:21.436 ********** 2025-05-17 01:01:13.146644 | orchestrator | 2025-05-17 01:01:13.146654 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-17 01:01:13.146663 | orchestrator | Saturday 17 May 2025 00:59:58 +0000 (0:00:00.052) 0:01:21.489 ********** 2025-05-17 01:01:13.146673 | orchestrator | 2025-05-17 01:01:13.146683 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-17 01:01:13.146692 | orchestrator | Saturday 17 May 2025 00:59:58 +0000 (0:00:00.054) 0:01:21.544 ********** 2025-05-17 01:01:13.146702 | orchestrator | 2025-05-17 01:01:13.146711 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] 
******************** 2025-05-17 01:01:13.146721 | orchestrator | Saturday 17 May 2025 00:59:58 +0000 (0:00:00.055) 0:01:21.599 ********** 2025-05-17 01:01:13.146730 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:01:13.146740 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:01:13.146750 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:01:13.146759 | orchestrator | 2025-05-17 01:01:13.146769 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-17 01:01:13.146778 | orchestrator | Saturday 17 May 2025 01:00:12 +0000 (0:00:14.349) 0:01:35.949 ********** 2025-05-17 01:01:13.146788 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:01:13.146798 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:01:13.146807 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:01:13.146817 | orchestrator | 2025-05-17 01:01:13.146826 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-17 01:01:13.146836 | orchestrator | Saturday 17 May 2025 01:00:23 +0000 (0:00:10.444) 0:01:46.394 ********** 2025-05-17 01:01:13.146846 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:01:13.146855 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:01:13.146879 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:01:13.146888 | orchestrator | 2025-05-17 01:01:13.146898 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-17 01:01:13.147071 | orchestrator | Saturday 17 May 2025 01:00:28 +0000 (0:00:05.516) 0:01:51.910 ********** 2025-05-17 01:01:13.147084 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:01:13.147094 | orchestrator | 2025-05-17 01:01:13.147104 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-17 01:01:13.147114 | orchestrator | 
Saturday 17 May 2025 01:00:29 +0000 (0:00:00.731) 0:01:52.642 ********** 2025-05-17 01:01:13.147129 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:01:13.147139 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:13.147149 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:01:13.147158 | orchestrator | 2025-05-17 01:01:13.147168 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-17 01:01:13.147178 | orchestrator | Saturday 17 May 2025 01:00:30 +0000 (0:00:01.143) 0:01:53.785 ********** 2025-05-17 01:01:13.147187 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:01:13.147196 | orchestrator | 2025-05-17 01:01:13.147206 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-17 01:01:13.147216 | orchestrator | Saturday 17 May 2025 01:00:31 +0000 (0:00:01.483) 0:01:55.269 ********** 2025-05-17 01:01:13.147225 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-17 01:01:13.147235 | orchestrator | 2025-05-17 01:01:13.147245 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-17 01:01:13.147254 | orchestrator | Saturday 17 May 2025 01:00:40 +0000 (0:00:08.372) 0:02:03.641 ********** 2025-05-17 01:01:13.147264 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-17 01:01:13.147273 | orchestrator | 2025-05-17 01:01:13.147283 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-17 01:01:13.147292 | orchestrator | Saturday 17 May 2025 01:00:59 +0000 (0:00:18.869) 0:02:22.510 ********** 2025-05-17 01:01:13.147301 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-17 01:01:13.147311 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-17 01:01:13.147321 | orchestrator | 2025-05-17 
01:01:13.147330 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-17 01:01:13.147340 | orchestrator | Saturday 17 May 2025 01:01:05 +0000 (0:00:06.206) 0:02:28.717 ********** 2025-05-17 01:01:13.147350 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.147359 | orchestrator | 2025-05-17 01:01:13.147369 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-17 01:01:13.147378 | orchestrator | Saturday 17 May 2025 01:01:05 +0000 (0:00:00.128) 0:02:28.845 ********** 2025-05-17 01:01:13.147388 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.147397 | orchestrator | 2025-05-17 01:01:13.147407 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-17 01:01:13.147452 | orchestrator | Saturday 17 May 2025 01:01:05 +0000 (0:00:00.119) 0:02:28.965 ********** 2025-05-17 01:01:13.147465 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.147475 | orchestrator | 2025-05-17 01:01:13.147485 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-17 01:01:13.147494 | orchestrator | Saturday 17 May 2025 01:01:05 +0000 (0:00:00.110) 0:02:29.075 ********** 2025-05-17 01:01:13.147504 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.147514 | orchestrator | 2025-05-17 01:01:13.147523 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-17 01:01:13.147532 | orchestrator | Saturday 17 May 2025 01:01:06 +0000 (0:00:00.443) 0:02:29.519 ********** 2025-05-17 01:01:13.147542 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:13.147552 | orchestrator | 2025-05-17 01:01:13.147561 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-17 01:01:13.147571 | orchestrator | Saturday 17 May 2025 01:01:09 +0000 (0:00:03.161) 0:02:32.680 
********** 2025-05-17 01:01:13.147580 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:13.147590 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:01:13.147600 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:01:13.147609 | orchestrator | 2025-05-17 01:01:13.147619 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 01:01:13.147634 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-17 01:01:13.147652 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-17 01:01:13.147662 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-17 01:01:13.147672 | orchestrator | 2025-05-17 01:01:13.147681 | orchestrator | 2025-05-17 01:01:13.147691 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:01:13.147700 | orchestrator | Saturday 17 May 2025 01:01:09 +0000 (0:00:00.547) 0:02:33.227 ********** 2025-05-17 01:01:13.147710 | orchestrator | =============================================================================== 2025-05-17 01:01:13.147719 | orchestrator | service-ks-register : keystone | Creating services --------------------- 18.87s 2025-05-17 01:01:13.147729 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.35s 2025-05-17 01:01:13.147738 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.03s 2025-05-17 01:01:13.147748 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.44s 2025-05-17 01:01:13.147757 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.12s 2025-05-17 01:01:13.147766 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.97s 
2025-05-17 01:01:13.147805 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.37s 2025-05-17 01:01:13.147816 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.35s 2025-05-17 01:01:13.147826 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.21s 2025-05-17 01:01:13.147835 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.52s 2025-05-17 01:01:13.147938 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.23s 2025-05-17 01:01:13.147949 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.21s 2025-05-17 01:01:13.147959 | orchestrator | keystone : Creating default user role ----------------------------------- 3.16s 2025-05-17 01:01:13.147969 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.12s 2025-05-17 01:01:13.148004 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.88s 2025-05-17 01:01:13.148015 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.65s 2025-05-17 01:01:13.148025 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.63s 2025-05-17 01:01:13.148035 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.54s 2025-05-17 01:01:13.148069 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.46s 2025-05-17 01:01:13.148081 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.45s 2025-05-17 01:01:13.148091 | orchestrator | 2025-05-17 01:01:13 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:01:13.148101 | orchestrator | 2025-05-17 01:01:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 
01:01:16.178672 | orchestrator | 2025-05-17 01:01:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:01:16.179139 | orchestrator | 2025-05-17 01:01:16 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED 2025-05-17 01:01:16.180089 | orchestrator | 2025-05-17 01:01:16 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:01:16.181110 | orchestrator | 2025-05-17 01:01:16 | INFO  | Task a80bc4bc-27aa-44b4-b7de-53415729df85 is in state STARTED 2025-05-17 01:01:16.182123 | orchestrator | 2025-05-17 01:01:16 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:01:16.182923 | orchestrator | 2025-05-17 01:01:16 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED 2025-05-17 01:01:16.185224 | orchestrator | 2025-05-17 01:01:16 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:01:16.185301 | orchestrator | 2025-05-17 01:01:16 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:25.350562 | orchestrator | 2025-05-17 01:01:25 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:01:25.350651 | orchestrator | 2025-05-17 01:01:25 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:01:28.409279 | orchestrator | 2025-05-17 01:01:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:01:28.410815 | orchestrator | 2025-05-17 01:01:28 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state STARTED 2025-05-17 01:01:28.412635 | orchestrator | 2025-05-17 01:01:28 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:01:28.414339 | orchestrator | 2025-05-17 01:01:28 | INFO  | Task a80bc4bc-27aa-44b4-b7de-53415729df85 is in state SUCCESS 2025-05-17 01:01:28.415770 | orchestrator | 2025-05-17 01:01:28.415805 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-17 01:01:28.415817 | orchestrator | 2025-05-17 01:01:28.415828 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-05-17 01:01:28.415840 | orchestrator | 2025-05-17 01:01:28.415884 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-17 01:01:28.415897 | orchestrator | Saturday 17 May 2025 01:01:00 +0000 (0:00:00.439) 0:00:00.439 ********** 2025-05-17 01:01:28.415909 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-17 01:01:28.415920 | orchestrator | 2025-05-17 01:01:28.415931 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-17 01:01:28.415942 | orchestrator | Saturday 17 May 2025 01:01:00 +0000 (0:00:00.204) 0:00:00.643 ********** 2025-05-17 01:01:28.415954 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 01:01:28.415965 | orchestrator | changed: 
[testbed-node-0] => (item=testbed-node-1) 2025-05-17 01:01:28.415976 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-17 01:01:28.415987 | orchestrator | 2025-05-17 01:01:28.415997 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-17 01:01:28.416008 | orchestrator | Saturday 17 May 2025 01:01:01 +0000 (0:00:00.792) 0:00:01.435 ********** 2025-05-17 01:01:28.416019 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-17 01:01:28.416029 | orchestrator | 2025-05-17 01:01:28.416040 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-17 01:01:28.416050 | orchestrator | Saturday 17 May 2025 01:01:01 +0000 (0:00:00.219) 0:00:01.654 ********** 2025-05-17 01:01:28.416071 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.416093 | orchestrator | 2025-05-17 01:01:28.416112 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-17 01:01:28.416132 | orchestrator | Saturday 17 May 2025 01:01:02 +0000 (0:00:00.656) 0:00:02.311 ********** 2025-05-17 01:01:28.416154 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.416172 | orchestrator | 2025-05-17 01:01:28.416209 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-17 01:01:28.416221 | orchestrator | Saturday 17 May 2025 01:01:02 +0000 (0:00:00.120) 0:00:02.432 ********** 2025-05-17 01:01:28.416232 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.416243 | orchestrator | 2025-05-17 01:01:28.416254 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-17 01:01:28.416265 | orchestrator | Saturday 17 May 2025 01:01:02 +0000 (0:00:00.438) 0:00:02.871 ********** 2025-05-17 01:01:28.416276 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.416287 | orchestrator | 2025-05-17 
01:01:28.416298 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-17 01:01:28.416308 | orchestrator | Saturday 17 May 2025 01:01:02 +0000 (0:00:00.132) 0:00:03.003 ********** 2025-05-17 01:01:28.416319 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.416330 | orchestrator | 2025-05-17 01:01:28.416341 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-17 01:01:28.416351 | orchestrator | Saturday 17 May 2025 01:01:02 +0000 (0:00:00.117) 0:00:03.120 ********** 2025-05-17 01:01:28.416362 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.416373 | orchestrator | 2025-05-17 01:01:28.416385 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-17 01:01:28.416420 | orchestrator | Saturday 17 May 2025 01:01:03 +0000 (0:00:00.154) 0:00:03.275 ********** 2025-05-17 01:01:28.416433 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.416462 | orchestrator | 2025-05-17 01:01:28.416475 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-17 01:01:28.416488 | orchestrator | Saturday 17 May 2025 01:01:03 +0000 (0:00:00.129) 0:00:03.404 ********** 2025-05-17 01:01:28.416516 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.416540 | orchestrator | 2025-05-17 01:01:28.416554 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-17 01:01:28.416567 | orchestrator | Saturday 17 May 2025 01:01:03 +0000 (0:00:00.142) 0:00:03.547 ********** 2025-05-17 01:01:28.416580 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 01:01:28.416593 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 01:01:28.416605 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 
01:01:28.416615 | orchestrator | 2025-05-17 01:01:28.416626 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-17 01:01:28.416637 | orchestrator | Saturday 17 May 2025 01:01:04 +0000 (0:00:00.806) 0:00:04.354 ********** 2025-05-17 01:01:28.416648 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.416658 | orchestrator | 2025-05-17 01:01:28.416669 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-17 01:01:28.416680 | orchestrator | Saturday 17 May 2025 01:01:04 +0000 (0:00:00.253) 0:00:04.607 ********** 2025-05-17 01:01:28.416691 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 01:01:28.416701 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 01:01:28.416712 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 01:01:28.416723 | orchestrator | 2025-05-17 01:01:28.416734 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-17 01:01:28.416744 | orchestrator | Saturday 17 May 2025 01:01:06 +0000 (0:00:01.846) 0:00:06.454 ********** 2025-05-17 01:01:28.416755 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-17 01:01:28.416766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-17 01:01:28.416777 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-17 01:01:28.416788 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.416798 | orchestrator | 2025-05-17 01:01:28.416809 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-17 01:01:28.416833 | orchestrator | Saturday 17 May 2025 01:01:06 +0000 (0:00:00.407) 0:00:06.862 ********** 2025-05-17 01:01:28.416846 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-17 01:01:28.416923 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-17 01:01:28.416935 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-17 01:01:28.417054 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417070 | orchestrator | 2025-05-17 01:01:28.417082 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-17 01:01:28.417093 | orchestrator | Saturday 17 May 2025 01:01:07 +0000 (0:00:00.754) 0:00:07.617 ********** 2025-05-17 01:01:28.417106 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-17 01:01:28.417150 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2025-05-17 01:01:28.417173 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-17 01:01:28.417194 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417212 | orchestrator | 2025-05-17 01:01:28.417229 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-17 01:01:28.417241 | orchestrator | Saturday 17 May 2025 01:01:07 +0000 (0:00:00.176) 0:00:07.794 ********** 2025-05-17 01:01:28.417254 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '312390d96c3a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-17 01:01:04.974413', 'end': '2025-05-17 01:01:05.014519', 'delta': '0:00:00.040106', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['312390d96c3a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-17 01:01:28.417270 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '917e49e1246c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-17 01:01:05.524819', 'end': '2025-05-17 01:01:05.561346', 'delta': '0:00:00.036527', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q 
--filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['917e49e1246c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-17 01:01:28.417295 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '7d3fa5b9497f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-17 01:01:06.026150', 'end': '2025-05-17 01:01:06.062969', 'delta': '0:00:00.036819', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7d3fa5b9497f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-17 01:01:28.417308 | orchestrator | 2025-05-17 01:01:28.417319 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-17 01:01:28.417329 | orchestrator | Saturday 17 May 2025 01:01:07 +0000 (0:00:00.211) 0:00:08.005 ********** 2025-05-17 01:01:28.417340 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.417351 | orchestrator | 2025-05-17 01:01:28.417362 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-17 01:01:28.417373 | orchestrator | Saturday 17 May 2025 01:01:07 +0000 (0:00:00.233) 0:00:08.239 ********** 2025-05-17 01:01:28.417391 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-17 01:01:28.417402 | orchestrator | 2025-05-17 01:01:28.417413 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] 
********************************* 2025-05-17 01:01:28.417424 | orchestrator | Saturday 17 May 2025 01:01:09 +0000 (0:00:01.512) 0:00:09.751 ********** 2025-05-17 01:01:28.417435 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417445 | orchestrator | 2025-05-17 01:01:28.417456 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-17 01:01:28.417467 | orchestrator | Saturday 17 May 2025 01:01:09 +0000 (0:00:00.128) 0:00:09.880 ********** 2025-05-17 01:01:28.417478 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417489 | orchestrator | 2025-05-17 01:01:28.417504 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-17 01:01:28.417516 | orchestrator | Saturday 17 May 2025 01:01:09 +0000 (0:00:00.237) 0:00:10.117 ********** 2025-05-17 01:01:28.417527 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417538 | orchestrator | 2025-05-17 01:01:28.417548 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-17 01:01:28.417559 | orchestrator | Saturday 17 May 2025 01:01:09 +0000 (0:00:00.125) 0:00:10.243 ********** 2025-05-17 01:01:28.417570 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.417581 | orchestrator | 2025-05-17 01:01:28.417592 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-17 01:01:28.417602 | orchestrator | Saturday 17 May 2025 01:01:10 +0000 (0:00:00.138) 0:00:10.381 ********** 2025-05-17 01:01:28.417613 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417624 | orchestrator | 2025-05-17 01:01:28.417634 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-17 01:01:28.417645 | orchestrator | Saturday 17 May 2025 01:01:10 +0000 (0:00:00.225) 0:00:10.607 ********** 2025-05-17 01:01:28.417656 | orchestrator | skipping: 
[testbed-node-0] 2025-05-17 01:01:28.417666 | orchestrator | 2025-05-17 01:01:28.417677 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-17 01:01:28.417690 | orchestrator | Saturday 17 May 2025 01:01:10 +0000 (0:00:00.187) 0:00:10.794 ********** 2025-05-17 01:01:28.417704 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417717 | orchestrator | 2025-05-17 01:01:28.417730 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-17 01:01:28.417743 | orchestrator | Saturday 17 May 2025 01:01:10 +0000 (0:00:00.134) 0:00:10.928 ********** 2025-05-17 01:01:28.417756 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417768 | orchestrator | 2025-05-17 01:01:28.417781 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-17 01:01:28.417794 | orchestrator | Saturday 17 May 2025 01:01:10 +0000 (0:00:00.121) 0:00:11.049 ********** 2025-05-17 01:01:28.417807 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417820 | orchestrator | 2025-05-17 01:01:28.417833 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-17 01:01:28.417846 | orchestrator | Saturday 17 May 2025 01:01:10 +0000 (0:00:00.119) 0:00:11.169 ********** 2025-05-17 01:01:28.417890 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417904 | orchestrator | 2025-05-17 01:01:28.417916 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-17 01:01:28.417929 | orchestrator | Saturday 17 May 2025 01:01:11 +0000 (0:00:00.140) 0:00:11.309 ********** 2025-05-17 01:01:28.417942 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.417955 | orchestrator | 2025-05-17 01:01:28.417967 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-17 
01:01:28.417980 | orchestrator | Saturday 17 May 2025 01:01:11 +0000 (0:00:00.332) 0:00:11.641 ********** 2025-05-17 01:01:28.417992 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.418005 | orchestrator | 2025-05-17 01:01:28.418139 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-17 01:01:28.418173 | orchestrator | Saturday 17 May 2025 01:01:11 +0000 (0:00:00.116) 0:00:11.758 ********** 2025-05-17 01:01:28.418360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:01:28.418388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:01:28.418401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:01:28.418412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:01:28.418424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:01:28.418435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:01:28.418446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:01:28.418457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-17 01:01:28.418525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part1', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part14', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part15', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part16', 'scsi-SQEMU_QEMU_HARDDISK_401efc10-68df-4215-9146-18eb1d7fe997-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:01:28.418553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-17-00-01-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-17 01:01:28.418571 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.418583 | orchestrator | 2025-05-17 01:01:28.418594 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-17 01:01:28.418605 | orchestrator | Saturday 17 May 2025 01:01:11 +0000 (0:00:00.241) 0:00:11.999 ********** 2025-05-17 01:01:28.418616 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.418627 | orchestrator | 2025-05-17 01:01:28.418638 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-17 01:01:28.418649 | orchestrator | Saturday 17 May 2025 01:01:12 +0000 (0:00:00.282) 0:00:12.282 ********** 2025-05-17 01:01:28.418659 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.418670 | orchestrator | 2025-05-17 01:01:28.418681 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-17 01:01:28.418692 | orchestrator | Saturday 17 May 2025 01:01:12 +0000 (0:00:00.159) 0:00:12.441 ********** 2025-05-17 01:01:28.418702 | orchestrator | 
skipping: [testbed-node-0] 2025-05-17 01:01:28.418713 | orchestrator | 2025-05-17 01:01:28.418724 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-17 01:01:28.418735 | orchestrator | Saturday 17 May 2025 01:01:12 +0000 (0:00:00.178) 0:00:12.620 ********** 2025-05-17 01:01:28.418745 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.418756 | orchestrator | 2025-05-17 01:01:28.418767 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-17 01:01:28.418777 | orchestrator | Saturday 17 May 2025 01:01:12 +0000 (0:00:00.496) 0:00:13.117 ********** 2025-05-17 01:01:28.418788 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.418799 | orchestrator | 2025-05-17 01:01:28.418810 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-17 01:01:28.418828 | orchestrator | Saturday 17 May 2025 01:01:13 +0000 (0:00:00.163) 0:00:13.280 ********** 2025-05-17 01:01:28.418838 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.418882 | orchestrator | 2025-05-17 01:01:28.418896 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-17 01:01:28.418906 | orchestrator | Saturday 17 May 2025 01:01:13 +0000 (0:00:00.484) 0:00:13.765 ********** 2025-05-17 01:01:28.418917 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.418928 | orchestrator | 2025-05-17 01:01:28.418938 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-17 01:01:28.418949 | orchestrator | Saturday 17 May 2025 01:01:13 +0000 (0:00:00.177) 0:00:13.943 ********** 2025-05-17 01:01:28.418960 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.418970 | orchestrator | 2025-05-17 01:01:28.418981 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-17 01:01:28.418992 | orchestrator 
| Saturday 17 May 2025 01:01:13 +0000 (0:00:00.247) 0:00:14.191 ********** 2025-05-17 01:01:28.419002 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.419013 | orchestrator | 2025-05-17 01:01:28.419024 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-17 01:01:28.419034 | orchestrator | Saturday 17 May 2025 01:01:14 +0000 (0:00:00.397) 0:00:14.588 ********** 2025-05-17 01:01:28.419045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-17 01:01:28.419072 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-17 01:01:28.419084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-17 01:01:28.419095 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.419105 | orchestrator | 2025-05-17 01:01:28.419116 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-17 01:01:28.419127 | orchestrator | Saturday 17 May 2025 01:01:14 +0000 (0:00:00.505) 0:00:15.094 ********** 2025-05-17 01:01:28.419137 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-17 01:01:28.419148 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-17 01:01:28.419159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-17 01:01:28.419170 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.419181 | orchestrator | 2025-05-17 01:01:28.419202 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-17 01:01:28.419222 | orchestrator | Saturday 17 May 2025 01:01:15 +0000 (0:00:00.473) 0:00:15.567 ********** 2025-05-17 01:01:28.419241 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 01:01:28.419262 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-17 01:01:28.419281 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 
2025-05-17 01:01:28.419299 | orchestrator | 2025-05-17 01:01:28.419318 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-17 01:01:28.419330 | orchestrator | Saturday 17 May 2025 01:01:16 +0000 (0:00:01.208) 0:00:16.776 ********** 2025-05-17 01:01:28.419341 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-17 01:01:28.419352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-17 01:01:28.419362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-17 01:01:28.419373 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.419384 | orchestrator | 2025-05-17 01:01:28.419394 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-17 01:01:28.419405 | orchestrator | Saturday 17 May 2025 01:01:16 +0000 (0:00:00.227) 0:00:17.003 ********** 2025-05-17 01:01:28.419416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-17 01:01:28.419426 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-17 01:01:28.419437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-17 01:01:28.419448 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.419458 | orchestrator | 2025-05-17 01:01:28.419478 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-17 01:01:28.419489 | orchestrator | Saturday 17 May 2025 01:01:16 +0000 (0:00:00.181) 0:00:17.184 ********** 2025-05-17 01:01:28.419500 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-17 01:01:28.419511 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-17 01:01:28.419529 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-17 
01:01:28.419540 | orchestrator | 2025-05-17 01:01:28.419551 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-17 01:01:28.419562 | orchestrator | Saturday 17 May 2025 01:01:17 +0000 (0:00:00.151) 0:00:17.336 ********** 2025-05-17 01:01:28.419573 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.419583 | orchestrator | 2025-05-17 01:01:28.419594 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-17 01:01:28.419605 | orchestrator | Saturday 17 May 2025 01:01:17 +0000 (0:00:00.117) 0:00:17.454 ********** 2025-05-17 01:01:28.419615 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:01:28.419626 | orchestrator | 2025-05-17 01:01:28.419637 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-17 01:01:28.419648 | orchestrator | Saturday 17 May 2025 01:01:17 +0000 (0:00:00.096) 0:00:17.551 ********** 2025-05-17 01:01:28.419658 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 01:01:28.419669 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 01:01:28.419680 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 01:01:28.419690 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-17 01:01:28.419701 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-17 01:01:28.419712 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-17 01:01:28.419722 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-17 01:01:28.419733 | orchestrator | 2025-05-17 01:01:28.419743 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 
2025-05-17 01:01:28.419754 | orchestrator | Saturday 17 May 2025 01:01:18 +0000 (0:00:00.939) 0:00:18.490 ********** 2025-05-17 01:01:28.419765 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-17 01:01:28.419775 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-17 01:01:28.419786 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-17 01:01:28.419796 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-17 01:01:28.419806 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-17 01:01:28.419817 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-17 01:01:28.419827 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-17 01:01:28.419838 | orchestrator | 2025-05-17 01:01:28.419874 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-17 01:01:28.419888 | orchestrator | Saturday 17 May 2025 01:01:19 +0000 (0:00:01.287) 0:00:19.777 ********** 2025-05-17 01:01:28.419899 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:01:28.419910 | orchestrator | 2025-05-17 01:01:28.419921 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-17 01:01:28.419932 | orchestrator | Saturday 17 May 2025 01:01:19 +0000 (0:00:00.443) 0:00:20.221 ********** 2025-05-17 01:01:28.419943 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-17 01:01:28.419953 | orchestrator | 2025-05-17 01:01:28.419964 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-17 01:01:28.419989 | orchestrator | Saturday 17 May 2025 01:01:20 +0000 (0:00:00.510) 0:00:20.731 ********** 
2025-05-17 01:01:28.420008 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring)
2025-05-17 01:01:28.420020 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring)
2025-05-17 01:01:28.420031 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring)
2025-05-17 01:01:28.420041 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring)
2025-05-17 01:01:28.420052 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring)
2025-05-17 01:01:28.420062 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring)
2025-05-17 01:01:28.420073 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring)
2025-05-17 01:01:28.420084 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring)
2025-05-17 01:01:28.420095 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring)
2025-05-17 01:01:28.420105 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring)
2025-05-17 01:01:28.420116 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring)
2025-05-17 01:01:28.420126 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring)
2025-05-17 01:01:28.420137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)
2025-05-17 01:01:28.420147 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)
2025-05-17 01:01:28.420158 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)
2025-05-17 01:01:28.420168 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)
2025-05-17 01:01:28.420179 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring)
2025-05-17 01:01:28.420190 | orchestrator |
2025-05-17 01:01:28.420206 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:01:28.420217 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-17 01:01:28.420229 | orchestrator |
2025-05-17 01:01:28.420246 | orchestrator |
2025-05-17 01:01:28.420265 | orchestrator |
2025-05-17 01:01:28.420285 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 01:01:28.420305 | orchestrator | Saturday 17 May 2025 01:01:26 +0000 (0:00:06.180) 0:00:26.912 **********
2025-05-17 01:01:28.420325 | orchestrator | ===============================================================================
2025-05-17 01:01:28.420343 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.18s
2025-05-17 01:01:28.420361 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.85s
2025-05-17 01:01:28.420373 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.51s
2025-05-17 01:01:28.420383 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.29s
2025-05-17 01:01:28.420394 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.21s
2025-05-17 01:01:28.420405 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.94s
2025-05-17 01:01:28.420415 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.81s
2025-05-17 01:01:28.420426 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.79s
2025-05-17 01:01:28.420436 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.76s
2025-05-17 01:01:28.420447 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.66s
2025-05-17 01:01:28.420458 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.51s
2025-05-17 01:01:28.420478 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.51s
2025-05-17 01:01:28.420489 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.50s
2025-05-17 01:01:28.420499 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.48s
2025-05-17 01:01:28.420510 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.47s
2025-05-17 01:01:28.420521 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.44s
2025-05-17 01:01:28.420531 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.44s
2025-05-17 01:01:28.420542 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.41s
2025-05-17 01:01:28.420553 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.40s
2025-05-17 01:01:28.420563 | orchestrator | ceph-facts : resolve bluestore_wal_device link(s) ----------------------- 0.33s
2025-05-17 01:01:28.420574 | orchestrator | 2025-05-17 01:01:28 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:28.420585 | orchestrator | 2025-05-17 01:01:28 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:28.420596 | orchestrator | 2025-05-17 01:01:28 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:28.420607 | orchestrator | 2025-05-17 01:01:28 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:31.467475 | orchestrator | 2025-05-17 01:01:31 | INFO  | Task
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:31.467571 | orchestrator | 2025-05-17 01:01:31 | INFO  | Task cb5f40de-0cfe-4045-b6a9-6a9d563508f8 is in state SUCCESS
2025-05-17 01:01:31.469915 | orchestrator | 2025-05-17 01:01:31 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:31.472502 | orchestrator | 2025-05-17 01:01:31 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:31.473898 | orchestrator | 2025-05-17 01:01:31 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:31.475075 | orchestrator | 2025-05-17 01:01:31 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:31.475364 | orchestrator | 2025-05-17 01:01:31 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:34.532306 | orchestrator | 2025-05-17 01:01:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:34.533964 | orchestrator | 2025-05-17 01:01:34 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:34.535882 | orchestrator | 2025-05-17 01:01:34 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:34.537186 | orchestrator | 2025-05-17 01:01:34 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:34.540271 | orchestrator | 2025-05-17 01:01:34 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:01:34.542460 | orchestrator | 2025-05-17 01:01:34 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:34.542500 | orchestrator | 2025-05-17 01:01:34 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:37.594319 | orchestrator | 2025-05-17 01:01:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:37.596810 | orchestrator | 2025-05-17 01:01:37 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:37.598948 | orchestrator | 2025-05-17 01:01:37 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:37.600922 | orchestrator | 2025-05-17 01:01:37 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:37.602392 | orchestrator | 2025-05-17 01:01:37 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:01:37.603334 | orchestrator | 2025-05-17 01:01:37 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:37.603817 | orchestrator | 2025-05-17 01:01:37 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:40.651194 | orchestrator | 2025-05-17 01:01:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:40.653199 | orchestrator | 2025-05-17 01:01:40 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:40.655030 | orchestrator | 2025-05-17 01:01:40 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:40.657304 | orchestrator | 2025-05-17 01:01:40 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:40.658528 | orchestrator | 2025-05-17 01:01:40 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:01:40.660027 | orchestrator | 2025-05-17 01:01:40 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:40.660053 | orchestrator | 2025-05-17 01:01:40 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:43.697786 | orchestrator | 2025-05-17 01:01:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:43.702312 | orchestrator | 2025-05-17 01:01:43 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:43.702380 | orchestrator | 2025-05-17 01:01:43 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:43.705260 | orchestrator | 2025-05-17 01:01:43 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:43.706350 | orchestrator | 2025-05-17 01:01:43 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:01:43.708403 | orchestrator | 2025-05-17 01:01:43 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:43.709263 | orchestrator | 2025-05-17 01:01:43 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:46.760049 | orchestrator | 2025-05-17 01:01:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:46.763937 | orchestrator | 2025-05-17 01:01:46 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:46.766123 | orchestrator | 2025-05-17 01:01:46 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:46.767827 | orchestrator | 2025-05-17 01:01:46 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:46.769446 | orchestrator | 2025-05-17 01:01:46 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:01:46.770329 | orchestrator | 2025-05-17 01:01:46 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:46.770572 | orchestrator | 2025-05-17 01:01:46 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:49.811527 | orchestrator | 2025-05-17 01:01:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:49.812872 | orchestrator | 2025-05-17 01:01:49 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:49.814259 | orchestrator | 2025-05-17 01:01:49 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:49.815393 | orchestrator | 2025-05-17 01:01:49 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:49.816930 | orchestrator | 2025-05-17 01:01:49 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:01:49.816976 | orchestrator | 2025-05-17 01:01:49 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:49.817715 | orchestrator | 2025-05-17 01:01:49 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:52.868503 | orchestrator | 2025-05-17 01:01:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:52.869131 | orchestrator | 2025-05-17 01:01:52 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:52.871270 | orchestrator | 2025-05-17 01:01:52 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:52.872681 | orchestrator | 2025-05-17 01:01:52 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:52.873760 | orchestrator | 2025-05-17 01:01:52 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:01:52.877505 | orchestrator | 2025-05-17 01:01:52 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:52.877561 | orchestrator | 2025-05-17 01:01:52 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:55.917523 | orchestrator | 2025-05-17 01:01:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:55.917637 | orchestrator | 2025-05-17 01:01:55 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:55.917976 | orchestrator | 2025-05-17 01:01:55 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:55.920567 | orchestrator | 2025-05-17 01:01:55 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:55.921290 | orchestrator | 2025-05-17 01:01:55 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:01:55.922438 | orchestrator | 2025-05-17 01:01:55 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:55.922511 | orchestrator | 2025-05-17 01:01:55 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:01:58.945725 | orchestrator | 2025-05-17 01:01:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:01:58.945928 | orchestrator | 2025-05-17 01:01:58 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:01:58.946220 | orchestrator | 2025-05-17 01:01:58 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:01:58.946717 | orchestrator | 2025-05-17 01:01:58 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:01:58.947352 | orchestrator | 2025-05-17 01:01:58 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:01:58.947866 | orchestrator | 2025-05-17 01:01:58 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:01:58.947909 | orchestrator | 2025-05-17 01:01:58 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:01.969513 | orchestrator | 2025-05-17 01:02:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:01.971688 | orchestrator | 2025-05-17 01:02:01 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:01.972062 | orchestrator | 2025-05-17 01:02:01 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:01.973387 | orchestrator | 2025-05-17 01:02:01 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state STARTED
2025-05-17 01:02:01.975423 | orchestrator | 2025-05-17 01:02:01 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:02:01.975457 | orchestrator | 2025-05-17 01:02:01 | INFO  | Task
0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:01.975470 | orchestrator | 2025-05-17 01:02:01 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:05.013951 | orchestrator | 2025-05-17 01:02:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:05.014522 | orchestrator | 2025-05-17 01:02:05 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:05.014946 | orchestrator | 2025-05-17 01:02:05 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:05.015263 | orchestrator | 2025-05-17 01:02:05 | INFO  | Task 514fa136-c702-4164-8aae-54884d29b344 is in state SUCCESS
2025-05-17 01:02:05.015495 | orchestrator |
2025-05-17 01:02:05.015505 | orchestrator |
2025-05-17 01:02:05.015509 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-17 01:02:05.015514 | orchestrator |
2025-05-17 01:02:05.015518 | orchestrator | TASK [Check ceph keys] *********************************************************
2025-05-17 01:02:05.015535 | orchestrator | Saturday 17 May 2025 01:00:51 +0000 (0:00:00.140) 0:00:00.140 **********
2025-05-17 01:02:05.015539 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-17 01:02:05.015543 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-17 01:02:05.015547 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-17 01:02:05.015551 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-17 01:02:05.015555 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-17 01:02:05.015559 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-17 01:02:05.015563 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-17 01:02:05.015567 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-17 01:02:05.015570 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-17 01:02:05.015574 | orchestrator |
2025-05-17 01:02:05.015578 | orchestrator | TASK [Set _fetch_ceph_keys fact] ***********************************************
2025-05-17 01:02:05.015581 | orchestrator | Saturday 17 May 2025 01:00:54 +0000 (0:00:02.875) 0:00:03.016 **********
2025-05-17 01:02:05.015585 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-17 01:02:05.015589 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-17 01:02:05.015593 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-17 01:02:05.015596 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-17 01:02:05.015600 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-17 01:02:05.015604 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-17 01:02:05.015607 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-17 01:02:05.015611 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-17 01:02:05.015615 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-17 01:02:05.015618 | orchestrator |
2025-05-17 01:02:05.015622 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] ***
2025-05-17 01:02:05.015626 | orchestrator | Saturday 17 May 2025 01:00:54 +0000 (0:00:00.165) 0:00:03.245 **********
2025-05-17 01:02:05.015643 | orchestrator | ok: [testbed-manager] => {
2025-05-17 01:02:05.015650 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete."
2025-05-17 01:02:05.015656 | orchestrator | }
2025-05-17 01:02:05.015660 | orchestrator |
2025-05-17 01:02:05.015664 | orchestrator | TASK [Fetch ceph keys from the first monitor node] *****************************
2025-05-17 01:02:05.015668 | orchestrator | Saturday 17 May 2025 01:00:54 +0000 (0:00:00.165) 0:00:03.410 **********
2025-05-17 01:02:05.015672 | orchestrator | changed: [testbed-manager]
2025-05-17 01:02:05.015676 | orchestrator |
2025-05-17 01:02:05.015679 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] ***********
2025-05-17 01:02:05.015683 | orchestrator | Saturday 17 May 2025 01:01:27 +0000 (0:00:32.376) 0:00:35.787 **********
2025-05-17 01:02:05.015688 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'})
2025-05-17 01:02:05.015692 | orchestrator |
2025-05-17 01:02:05.015696 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ********************
2025-05-17 01:02:05.015700 | orchestrator | Saturday 17 May 2025 01:01:27 +0000 (0:00:00.453) 0:00:36.240 **********
2025-05-17 01:02:05.015705 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'})
2025-05-17 01:02:05.015711 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'})
2025-05-17 01:02:05.015715 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'})
2025-05-17 01:02:05.015719 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'})
2025-05-17 01:02:05.015723 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'})
2025-05-17 01:02:05.015732 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'})
2025-05-17 01:02:05.015739 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'})
2025-05-17 01:02:05.015743 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'})
2025-05-17 01:02:05.015746 | orchestrator |
2025-05-17 01:02:05.015750 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] *******************
2025-05-17 01:02:05.015754 | orchestrator | Saturday 17 May 2025 01:01:30 +0000 (0:00:02.880) 0:00:39.120 **********
2025-05-17 01:02:05.015758 | orchestrator | skipping: [testbed-manager]
2025-05-17 01:02:05.015762 | orchestrator |
2025-05-17 01:02:05.015765 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:02:05.015770 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-17 01:02:05.015774 | orchestrator |
2025-05-17 01:02:05.015778 | orchestrator | Saturday 17 May 2025 01:01:30 +0000 (0:00:00.030) 0:00:39.151 **********
2025-05-17 01:02:05.015782 | orchestrator | ===============================================================================
2025-05-17 01:02:05.015786 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 32.38s
2025-05-17 01:02:05.015793 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.88s
2025-05-17 01:02:05.015797 | orchestrator | Check ceph keys --------------------------------------------------------- 2.88s
2025-05-17 01:02:05.015800 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.45s
2025-05-17 01:02:05.015804 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.23s
2025-05-17 01:02:05.015808 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.17s
2025-05-17 01:02:05.015864 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s
2025-05-17 01:02:05.015868 | orchestrator |
2025-05-17 01:02:05.015873 | orchestrator | 2025-05-17 01:02:05 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:02:05.016380 | orchestrator | 2025-05-17 01:02:05 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:05.016388 | orchestrator | 2025-05-17 01:02:05 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:08.057323 | orchestrator | 2025-05-17 01:02:08 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED
2025-05-17 01:02:08.057438 | orchestrator | 2025-05-17 01:02:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:08.057944 | orchestrator | 2025-05-17 01:02:08 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:08.058578 | orchestrator | 2025-05-17 01:02:08 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
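The "Copy ceph kolla keys to the configuration repository" task above copies each fetched keyring to one or more overlay destinations inside the configuration repository. A minimal sketch of such a src-to-dest copy loop, assuming a hypothetical helper (this is not the actual OSISM playbook, and the mapping below only illustrates two of the items shown in the log):

```python
import shutil
from pathlib import Path

# Hypothetical src -> dest mapping, mirroring two of the items in the log above.
KEY_MAP = [
    ("ceph.client.cinder.keyring",
     "environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring"),
    ("ceph.client.nova.keyring",
     "environments/kolla/files/overlays/nova/ceph.client.nova.keyring"),
]

def copy_keys(fetch_dir: Path, config_repo: Path) -> list[Path]:
    """Copy fetched keyrings into the configuration repository tree."""
    copied = []
    for src, dest in KEY_MAP:
        target = config_repo / dest
        # Create the overlay directory if it does not exist yet.
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(fetch_dir / src, target)
        copied.append(target)
    return copied
```

The same source keyring can appear several times in the mapping (as `ceph.client.cinder.keyring` does in the log), since cinder-volume, cinder-backup, and nova each need their own copy.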
2025-05-17 01:02:08.059171 | orchestrator | 2025-05-17 01:02:08 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:02:08.059917 | orchestrator | 2025-05-17 01:02:08 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:08.059937 | orchestrator | 2025-05-17 01:02:08 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:11.098761 | orchestrator | 2025-05-17 01:02:11 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED
2025-05-17 01:02:11.098908 | orchestrator | 2025-05-17 01:02:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:11.099402 | orchestrator | 2025-05-17 01:02:11 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:11.101337 | orchestrator | 2025-05-17 01:02:11 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:11.101725 | orchestrator | 2025-05-17 01:02:11 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:02:11.102226 | orchestrator | 2025-05-17 01:02:11 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:11.102333 | orchestrator | 2025-05-17 01:02:11 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:14.131027 | orchestrator | 2025-05-17 01:02:14 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED
2025-05-17 01:02:14.131347 | orchestrator | 2025-05-17 01:02:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:14.131724 | orchestrator | 2025-05-17 01:02:14 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:14.132251 | orchestrator | 2025-05-17 01:02:14 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:14.132700 | orchestrator | 2025-05-17 01:02:14 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:02:14.133336 | orchestrator | 2025-05-17 01:02:14 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:14.133392 | orchestrator | 2025-05-17 01:02:14 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:17.166454 | orchestrator | 2025-05-17 01:02:17 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED
2025-05-17 01:02:17.166653 | orchestrator | 2025-05-17 01:02:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:17.166684 | orchestrator | 2025-05-17 01:02:17 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:17.167071 | orchestrator | 2025-05-17 01:02:17 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:17.167512 | orchestrator | 2025-05-17 01:02:17 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:02:17.168004 | orchestrator | 2025-05-17 01:02:17 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:17.168035 | orchestrator | 2025-05-17 01:02:17 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:20.217023 | orchestrator | 2025-05-17 01:02:20 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED
2025-05-17 01:02:20.219686 | orchestrator | 2025-05-17 01:02:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:20.219958 | orchestrator | 2025-05-17 01:02:20 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:20.220610 | orchestrator | 2025-05-17 01:02:20 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:20.222099 | orchestrator | 2025-05-17 01:02:20 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:02:20.229338 | orchestrator | 2025-05-17 01:02:20 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
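The repeated "Task … is in state STARTED" and "Wait 1 second(s) until the next check" lines come from a poll-until-done loop over asynchronous task IDs. A minimal sketch of that pattern, assuming a hypothetical `get_state` callback (this is not the actual OSISM client code):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll task states until every task has left the STARTED state."""
    pending = set(task_ids)
    while pending:
        # sorted() snapshots the set, so discarding while iterating is safe.
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Tasks that reach SUCCESS simply drop out of the polling set, which is why the log above shows the list of reported task IDs shrinking (and growing again as new tasks are scheduled) between checks.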
2025-05-17 01:02:20.229399 | orchestrator | 2025-05-17 01:02:20 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:23.244418 | orchestrator | 2025-05-17 01:02:23 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED
2025-05-17 01:02:23.244533 | orchestrator | 2025-05-17 01:02:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:23.244548 | orchestrator | 2025-05-17 01:02:23 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:23.245653 | orchestrator | 2025-05-17 01:02:23 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:23.245682 | orchestrator | 2025-05-17 01:02:23 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state STARTED
2025-05-17 01:02:23.245693 | orchestrator | 2025-05-17 01:02:23 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:23.245705 | orchestrator | 2025-05-17 01:02:23 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:26.273966 | orchestrator | 2025-05-17 01:02:26 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED
2025-05-17 01:02:26.274136 | orchestrator | 2025-05-17 01:02:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:26.274153 | orchestrator | 2025-05-17 01:02:26 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:26.278956 | orchestrator | 2025-05-17 01:02:26 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED
2025-05-17 01:02:26.279190 | orchestrator | 2025-05-17 01:02:26 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:26.280221 | orchestrator | 2025-05-17 01:02:26 | INFO  | Task 257c9eef-0dc6-448f-9921-e4fecdd04e26 is in state SUCCESS
2025-05-17 01:02:26.280609 | orchestrator | 2025-05-17 01:02:26 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:26.280758 | orchestrator | 2025-05-17 01:02:26 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:29.310984 | orchestrator | 2025-05-17 01:02:29 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED
2025-05-17 01:02:29.311109 | orchestrator | 2025-05-17 01:02:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:29.311139 | orchestrator | 2025-05-17 01:02:29 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:29.311699 | orchestrator | 2025-05-17 01:02:29 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED
2025-05-17 01:02:29.312510 | orchestrator | 2025-05-17 01:02:29 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:29.313042 | orchestrator | 2025-05-17 01:02:29 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:29.313079 | orchestrator | 2025-05-17 01:02:29 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:32.348897 | orchestrator | 2025-05-17 01:02:32 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED
2025-05-17 01:02:32.348999 | orchestrator | 2025-05-17 01:02:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:02:32.349023 | orchestrator | 2025-05-17 01:02:32 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:02:32.352893 | orchestrator | 2025-05-17 01:02:32 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED
2025-05-17 01:02:32.353114 | orchestrator | 2025-05-17 01:02:32 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:02:32.353594 | orchestrator | 2025-05-17 01:02:32 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:02:32.353623 | orchestrator | 2025-05-17 01:02:32 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:02:35.378691 |
orchestrator | 2025-05-17 01:02:35 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:02:35.378803 | orchestrator | 2025-05-17 01:02:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:02:35.378905 | orchestrator | 2025-05-17 01:02:35 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:02:35.379435 | orchestrator | 2025-05-17 01:02:35 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED 2025-05-17 01:02:35.379969 | orchestrator | 2025-05-17 01:02:35 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:02:35.380546 | orchestrator | 2025-05-17 01:02:35 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:02:35.380568 | orchestrator | 2025-05-17 01:02:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:02:38.405459 | orchestrator | 2025-05-17 01:02:38 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:02:38.405580 | orchestrator | 2025-05-17 01:02:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:02:38.406002 | orchestrator | 2025-05-17 01:02:38 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:02:38.406672 | orchestrator | 2025-05-17 01:02:38 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED 2025-05-17 01:02:38.407131 | orchestrator | 2025-05-17 01:02:38 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:02:38.408013 | orchestrator | 2025-05-17 01:02:38 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:02:38.408073 | orchestrator | 2025-05-17 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:02:41.437378 | orchestrator | 2025-05-17 01:02:41 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:02:41.440407 | 
orchestrator | 2025-05-17 01:02:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:02:41.441006 | orchestrator | 2025-05-17 01:02:41 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:02:41.441440 | orchestrator | 2025-05-17 01:02:41 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED 2025-05-17 01:02:41.446674 | orchestrator | 2025-05-17 01:02:41 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:02:41.447439 | orchestrator | 2025-05-17 01:02:41 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:02:41.447472 | orchestrator | 2025-05-17 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:02:44.470421 | orchestrator | 2025-05-17 01:02:44 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:02:44.470539 | orchestrator | 2025-05-17 01:02:44 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:02:44.470962 | orchestrator | 2025-05-17 01:02:44 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:02:44.472287 | orchestrator | 2025-05-17 01:02:44 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED 2025-05-17 01:02:44.473324 | orchestrator | 2025-05-17 01:02:44 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:02:44.474123 | orchestrator | 2025-05-17 01:02:44 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:02:44.474159 | orchestrator | 2025-05-17 01:02:44 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:02:47.500887 | orchestrator | 2025-05-17 01:02:47 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:02:47.501236 | orchestrator | 2025-05-17 01:02:47 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:02:47.502262 | 
orchestrator | 2025-05-17 01:02:47 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:02:47.502936 | orchestrator | 2025-05-17 01:02:47 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED 2025-05-17 01:02:47.503947 | orchestrator | 2025-05-17 01:02:47 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:02:47.505838 | orchestrator | 2025-05-17 01:02:47 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:02:47.505881 | orchestrator | 2025-05-17 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:02:50.536013 | orchestrator | 2025-05-17 01:02:50 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:02:50.537114 | orchestrator | 2025-05-17 01:02:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:02:50.538722 | orchestrator | 2025-05-17 01:02:50 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:02:50.540056 | orchestrator | 2025-05-17 01:02:50 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED 2025-05-17 01:02:50.541437 | orchestrator | 2025-05-17 01:02:50 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:02:50.542598 | orchestrator | 2025-05-17 01:02:50 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:02:50.542661 | orchestrator | 2025-05-17 01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:02:53.571794 | orchestrator | 2025-05-17 01:02:53 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:02:53.571984 | orchestrator | 2025-05-17 01:02:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:02:53.571999 | orchestrator | 2025-05-17 01:02:53 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:02:53.572011 | 
orchestrator | 2025-05-17 01:02:53 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED 2025-05-17 01:02:53.572022 | orchestrator | 2025-05-17 01:02:53 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:02:53.572033 | orchestrator | 2025-05-17 01:02:53 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:02:53.572043 | orchestrator | 2025-05-17 01:02:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:02:56.607433 | orchestrator | 2025-05-17 01:02:56 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:02:56.607553 | orchestrator | 2025-05-17 01:02:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:02:56.611618 | orchestrator | 2025-05-17 01:02:56 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:02:56.611702 | orchestrator | 2025-05-17 01:02:56 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED 2025-05-17 01:02:56.611922 | orchestrator | 2025-05-17 01:02:56 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:02:56.612279 | orchestrator | 2025-05-17 01:02:56 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:02:56.612307 | orchestrator | 2025-05-17 01:02:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:02:59.646407 | orchestrator | 2025-05-17 01:02:59 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:02:59.648213 | orchestrator | 2025-05-17 01:02:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:02:59.650896 | orchestrator | 2025-05-17 01:02:59 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:02:59.653062 | orchestrator | 2025-05-17 01:02:59 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state STARTED 2025-05-17 01:02:59.654581 | 
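The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" entries above come from a client polling each background task until it leaves the STARTED state. A minimal sketch of that polling pattern, assuming a hypothetical `get_state(task_id)` lookup (the real osism client queries its task result backend):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll every task until none is still running, mirroring the
    'Task ... is in state STARTED' / 'Wait 1 second(s)' loop in the log.

    get_state is an assumed callable returning e.g. "STARTED" or "SUCCESS".
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            time.sleep(interval)
    return True
```

In the log the checks arrive roughly every three seconds because each round queries several tasks before sleeping.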
orchestrator | 2025-05-17 01:02:59 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:02:59.656971 | orchestrator | 2025-05-17 01:02:59 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:02:59.657069 | orchestrator | 2025-05-17 01:02:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:03:02.693180 | orchestrator | 2025-05-17 01:03:02 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:03:02.693603 | orchestrator | 2025-05-17 01:03:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:03:02.693638 | orchestrator | 2025-05-17 01:03:02 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:03:02.694391 | orchestrator | 2025-05-17 01:03:02 | INFO  | Task 9c24f578-4f9f-4be9-8784-4c4180e689ca is in state SUCCESS 2025-05-17 01:03:02.694585 | orchestrator | 2025-05-17 01:03:02.694604 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-17 01:03:02.694617 | orchestrator | 2025-05-17 01:03:02.694629 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-17 01:03:02.694667 | orchestrator | Saturday 17 May 2025 01:01:13 +0000 (0:00:00.191) 0:00:00.191 ********** 2025-05-17 01:03:02.694679 | orchestrator | changed: [localhost] 2025-05-17 01:03:02.694694 | orchestrator | 2025-05-17 01:03:02.694706 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-05-17 01:03:02.694717 | orchestrator | Saturday 17 May 2025 01:01:14 +0000 (0:00:00.558) 0:00:00.750 ********** 2025-05-17 01:03:02.694728 | orchestrator | changed: [localhost] 2025-05-17 01:03:02.694771 | orchestrator | 2025-05-17 01:03:02.694784 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-05-17 01:03:02.694796 | orchestrator | Saturday 17 May 
2025 01:01:59 +0000 (0:00:44.694) 0:00:45.444 ********** 2025-05-17 01:03:02.694835 | orchestrator | changed: [localhost] 2025-05-17 01:03:02.694848 | orchestrator | 2025-05-17 01:03:02.694859 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:03:02.694870 | orchestrator | 2025-05-17 01:03:02.694881 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:03:02.694892 | orchestrator | Saturday 17 May 2025 01:02:03 +0000 (0:00:04.299) 0:00:49.743 ********** 2025-05-17 01:03:02.694903 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:03:02.694914 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:03:02.694924 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:03:02.694935 | orchestrator | 2025-05-17 01:03:02.694946 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 01:03:02.694957 | orchestrator | Saturday 17 May 2025 01:02:03 +0000 (0:00:00.396) 0:00:50.140 ********** 2025-05-17 01:03:02.694968 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-05-17 01:03:02.694979 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-05-17 01:03:02.694991 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-05-17 01:03:02.695002 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-05-17 01:03:02.695012 | orchestrator | 2025-05-17 01:03:02.695023 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-05-17 01:03:02.695034 | orchestrator | skipping: no hosts matched 2025-05-17 01:03:02.695047 | orchestrator | 2025-05-17 01:03:02.695058 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 01:03:02.695069 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2025-05-17 01:03:02.695083 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:03:02.695096 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:03:02.695106 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:03:02.695117 | orchestrator | 2025-05-17 01:03:02.695128 | orchestrator | 2025-05-17 01:03:02.695139 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:03:02.695150 | orchestrator | Saturday 17 May 2025 01:02:04 +0000 (0:00:00.583) 0:00:50.724 ********** 2025-05-17 01:03:02.695161 | orchestrator | =============================================================================== 2025-05-17 01:03:02.695171 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 44.69s 2025-05-17 01:03:02.695182 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.30s 2025-05-17 01:03:02.695193 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-05-17 01:03:02.695203 | orchestrator | Ensure the destination directory exists --------------------------------- 0.56s 2025-05-17 01:03:02.695217 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-05-17 01:03:02.695229 | orchestrator | 2025-05-17 01:03:02.695241 | orchestrator | 2025-05-17 01:03:02.695255 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-17 01:03:02.695279 | orchestrator | 2025-05-17 01:03:02.695292 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-17 01:03:02.695306 | orchestrator | Saturday 17 May 2025 01:01:34 +0000 (0:00:00.166) 0:00:00.166 ********** 2025-05-17 01:03:02.695319 
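The "Download ironic ipa images" play recapped above ran three localhost tasks: create a destination directory, then fetch the ironic-agent initramfs and kernel. A hedged Ansible sketch of that shape (the destination path and URL variables are illustrative assumptions, not taken from the job):

```yaml
- name: Download ironic ipa images
  hosts: localhost
  tasks:
    - name: Ensure the destination directory exists
      ansible.builtin.file:
        path: /opt/ironic-images          # assumed path
        state: directory

    - name: Download ironic-agent initramfs
      ansible.builtin.get_url:
        url: "{{ ipa_initramfs_url }}"    # assumed variable
        dest: /opt/ironic-images/ironic-agent.initramfs

    - name: Download ironic-agent kernel
      ansible.builtin.get_url:
        url: "{{ ipa_kernel_url }}"       # assumed variable
        dest: /opt/ironic-images/ironic-agent.kernel
```

The timing recap shows the initramfs download dominating the play (44.69s) while everything else completed in under five seconds.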
| orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-17 01:03:02.695332 | orchestrator | 2025-05-17 01:03:02.695344 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-17 01:03:02.695357 | orchestrator | Saturday 17 May 2025 01:01:34 +0000 (0:00:00.210) 0:00:00.376 ********** 2025-05-17 01:03:02.695388 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-17 01:03:02.695403 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-17 01:03:02.695416 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-17 01:03:02.695429 | orchestrator | 2025-05-17 01:03:02.695442 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-17 01:03:02.695455 | orchestrator | Saturday 17 May 2025 01:01:35 +0000 (0:00:01.216) 0:00:01.593 ********** 2025-05-17 01:03:02.695468 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-17 01:03:02.695481 | orchestrator | 2025-05-17 01:03:02.695494 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-17 01:03:02.695508 | orchestrator | Saturday 17 May 2025 01:01:36 +0000 (0:00:01.105) 0:00:02.699 ********** 2025-05-17 01:03:02.695539 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:02.695560 | orchestrator | 2025-05-17 01:03:02.695578 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-17 01:03:02.695596 | orchestrator | Saturday 17 May 2025 01:01:37 +0000 (0:00:00.893) 0:00:03.592 ********** 2025-05-17 01:03:02.695614 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:02.695631 | orchestrator | 2025-05-17 01:03:02.695649 | orchestrator | TASK 
[osism.services.cephclient : Manage cephclient service] ******************* 2025-05-17 01:03:02.695668 | orchestrator | Saturday 17 May 2025 01:01:38 +0000 (0:00:00.950) 0:00:04.543 ********** 2025-05-17 01:03:02.695687 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-05-17 01:03:02.695706 | orchestrator | ok: [testbed-manager] 2025-05-17 01:03:02.695724 | orchestrator | 2025-05-17 01:03:02.695741 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-17 01:03:02.695752 | orchestrator | Saturday 17 May 2025 01:02:16 +0000 (0:00:37.977) 0:00:42.521 ********** 2025-05-17 01:03:02.695763 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-17 01:03:02.695774 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-17 01:03:02.695785 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-17 01:03:02.695796 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-17 01:03:02.695833 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-17 01:03:02.695846 | orchestrator | 2025-05-17 01:03:02.695856 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-17 01:03:02.695867 | orchestrator | Saturday 17 May 2025 01:02:19 +0000 (0:00:03.058) 0:00:45.580 ********** 2025-05-17 01:03:02.695878 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-17 01:03:02.695888 | orchestrator | 2025-05-17 01:03:02.695899 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-17 01:03:02.695910 | orchestrator | Saturday 17 May 2025 01:02:19 +0000 (0:00:00.329) 0:00:45.909 ********** 2025-05-17 01:03:02.695920 | orchestrator | skipping: [testbed-manager] 2025-05-17 01:03:02.695931 | orchestrator | 2025-05-17 01:03:02.695942 | orchestrator | TASK [osism.services.cephclient : Include rook task] 
*************************** 2025-05-17 01:03:02.695952 | orchestrator | Saturday 17 May 2025 01:02:20 +0000 (0:00:00.093) 0:00:46.003 ********** 2025-05-17 01:03:02.695973 | orchestrator | skipping: [testbed-manager] 2025-05-17 01:03:02.695984 | orchestrator | 2025-05-17 01:03:02.695995 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-17 01:03:02.696006 | orchestrator | Saturday 17 May 2025 01:02:20 +0000 (0:00:00.259) 0:00:46.263 ********** 2025-05-17 01:03:02.696016 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:02.696027 | orchestrator | 2025-05-17 01:03:02.696038 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-17 01:03:02.696048 | orchestrator | Saturday 17 May 2025 01:02:21 +0000 (0:00:01.253) 0:00:47.516 ********** 2025-05-17 01:03:02.696059 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:02.696070 | orchestrator | 2025-05-17 01:03:02.696080 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-17 01:03:02.696091 | orchestrator | Saturday 17 May 2025 01:02:22 +0000 (0:00:00.768) 0:00:48.285 ********** 2025-05-17 01:03:02.696102 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:02.696112 | orchestrator | 2025-05-17 01:03:02.696123 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-17 01:03:02.696133 | orchestrator | Saturday 17 May 2025 01:02:22 +0000 (0:00:00.453) 0:00:48.738 ********** 2025-05-17 01:03:02.696144 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-17 01:03:02.696155 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-17 01:03:02.696166 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-17 01:03:02.696177 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-17 01:03:02.696188 | orchestrator | 2025-05-17 01:03:02.696198 | orchestrator | 
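The "Manage cephclient service" task above logged "FAILED - RETRYING ... (10 retries left)" before succeeding, which is Ansible's `retries`/`until` loop waiting for the service to come up. The same poll-until-healthy pattern sketched in Python, assuming a hypothetical `check()` health probe:

```python
import time

def wait_until(check, retries=10, delay=5.0):
    """Re-run `check` until it returns truthy, mirroring Ansible's
    retries/until behaviour seen in the log. Returns the number of
    failed attempts before success."""
    for attempt in range(retries + 1):
        if check():
            return attempt
        if attempt < retries:
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            time.sleep(delay)
    raise RuntimeError("check did not succeed within retry budget")
```

Here the single retry accounts for most of the play's runtime (37.98s of 49.87s total).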
PLAY RECAP ********************************************************************* 2025-05-17 01:03:02.696209 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-17 01:03:02.696220 | orchestrator | 2025-05-17 01:03:02.696231 | orchestrator | Saturday 17 May 2025 01:02:23 +0000 (0:00:01.133) 0:00:49.872 ********** 2025-05-17 01:03:02.696241 | orchestrator | =============================================================================== 2025-05-17 01:03:02.696252 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.98s 2025-05-17 01:03:02.696262 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.06s 2025-05-17 01:03:02.696273 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.25s 2025-05-17 01:03:02.696287 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s 2025-05-17 01:03:02.696306 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.13s 2025-05-17 01:03:02.696324 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.11s 2025-05-17 01:03:02.696352 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2025-05-17 01:03:02.696371 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.89s 2025-05-17 01:03:02.696388 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s 2025-05-17 01:03:02.696406 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.45s 2025-05-17 01:03:02.696424 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.33s 2025-05-17 01:03:02.696444 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.26s 2025-05-17 
01:03:02.696462 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-05-17 01:03:02.696480 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.09s 2025-05-17 01:03:02.696498 | orchestrator | 2025-05-17 01:03:02.696614 | orchestrator | 2025-05-17 01:03:02 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:03:02.700625 | orchestrator | 2025-05-17 01:03:02 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:03:02.700664 | orchestrator | 2025-05-17 01:03:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:03:05.736239 | orchestrator | 2025-05-17 01:03:05 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:03:05.736352 | orchestrator | 2025-05-17 01:03:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:03:05.736368 | orchestrator | 2025-05-17 01:03:05 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:03:05.736379 | orchestrator | 2025-05-17 01:03:05 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:03:05.737034 | orchestrator | 2025-05-17 01:03:05 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:03:05.737121 | orchestrator | 2025-05-17 01:03:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:03:08.775920 | orchestrator | 2025-05-17 01:03:08 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:03:08.776094 | orchestrator | 2025-05-17 01:03:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:03:08.776402 | orchestrator | 2025-05-17 01:03:08 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:03:08.777030 | orchestrator | 2025-05-17 01:03:08 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state 
STARTED 2025-05-17 01:03:08.778472 | orchestrator | 2025-05-17 01:03:08 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:03:08.778500 | orchestrator | 2025-05-17 01:03:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:03:11.816797 | orchestrator | 2025-05-17 01:03:11 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:03:11.818237 | orchestrator | 2025-05-17 01:03:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:03:11.820028 | orchestrator | 2025-05-17 01:03:11 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:03:11.821582 | orchestrator | 2025-05-17 01:03:11 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:03:11.822790 | orchestrator | 2025-05-17 01:03:11 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:03:11.823015 | orchestrator | 2025-05-17 01:03:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:03:14.871341 | orchestrator | 2025-05-17 01:03:14 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:03:14.871447 | orchestrator | 2025-05-17 01:03:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:03:14.871976 | orchestrator | 2025-05-17 01:03:14 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:03:14.872502 | orchestrator | 2025-05-17 01:03:14 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:03:14.873137 | orchestrator | 2025-05-17 01:03:14 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:03:14.873159 | orchestrator | 2025-05-17 01:03:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:03:17.913142 | orchestrator | 2025-05-17 01:03:17 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 
01:03:17.913330 | orchestrator | 2025-05-17 01:03:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:03:17.913729 | orchestrator | 2025-05-17 01:03:17 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:03:17.914371 | orchestrator | 2025-05-17 01:03:17 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:03:17.915002 | orchestrator | 2025-05-17 01:03:17 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:03:17.915027 | orchestrator | 2025-05-17 01:03:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:03:20.949171 | orchestrator | 2025-05-17 01:03:20 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:03:20.949448 | orchestrator | 2025-05-17 01:03:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:03:20.950256 | orchestrator | 2025-05-17 01:03:20 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:03:20.950749 | orchestrator | 2025-05-17 01:03:20 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 01:03:20.951485 | orchestrator | 2025-05-17 01:03:20 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:03:20.951522 | orchestrator | 2025-05-17 01:03:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:03:23.986566 | orchestrator | 2025-05-17 01:03:23 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state STARTED 2025-05-17 01:03:23.988676 | orchestrator | 2025-05-17 01:03:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:03:23.988716 | orchestrator | 2025-05-17 01:03:23 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED 2025-05-17 01:03:23.989165 | orchestrator | 2025-05-17 01:03:23 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED 2025-05-17 
01:03:23.990875 | orchestrator | 2025-05-17 01:03:23 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:03:23.990924 | orchestrator | 2025-05-17 01:03:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:03:27.044569 | orchestrator | 2025-05-17 01:03:27 | INFO  | Task ea4b45ac-0c09-4725-9076-23493117c996 is in state SUCCESS 2025-05-17 01:03:27.046578 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-17 01:03:27.046618 | orchestrator | 2025-05-17 01:03:27.046627 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-17 01:03:27.046635 | orchestrator | 2025-05-17 01:03:27.046641 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-17 01:03:27.046648 | orchestrator | Saturday 17 May 2025 01:02:26 +0000 (0:00:00.320) 0:00:00.320 ********** 2025-05-17 01:03:27.046660 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:27.046674 | orchestrator | 2025-05-17 01:03:27.046682 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-17 01:03:27.046688 | orchestrator | Saturday 17 May 2025 01:02:28 +0000 (0:00:01.751) 0:00:02.071 ********** 2025-05-17 01:03:27.046695 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:27.046702 | orchestrator | 2025-05-17 01:03:27.046708 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-17 01:03:27.046714 | orchestrator | Saturday 17 May 2025 01:02:29 +0000 (0:00:00.902) 0:00:02.974 ********** 2025-05-17 01:03:27.046720 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:27.046727 | orchestrator | 2025-05-17 01:03:27.046733 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-17 01:03:27.046739 | orchestrator | Saturday 17 May 2025 01:02:30 +0000 (0:00:00.817) 
0:00:03.791 ********** 2025-05-17 01:03:27.046745 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:27.046751 | orchestrator | 2025-05-17 01:03:27.046757 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-17 01:03:27.046763 | orchestrator | Saturday 17 May 2025 01:02:30 +0000 (0:00:00.727) 0:00:04.519 ********** 2025-05-17 01:03:27.046769 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:27.046776 | orchestrator | 2025-05-17 01:03:27.046782 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-17 01:03:27.046883 | orchestrator | Saturday 17 May 2025 01:02:31 +0000 (0:00:00.899) 0:00:05.419 ********** 2025-05-17 01:03:27.046893 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:27.046899 | orchestrator | 2025-05-17 01:03:27.046906 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-17 01:03:27.046912 | orchestrator | Saturday 17 May 2025 01:02:32 +0000 (0:00:00.833) 0:00:06.252 ********** 2025-05-17 01:03:27.046918 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:27.046924 | orchestrator | 2025-05-17 01:03:27.046930 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-17 01:03:27.046936 | orchestrator | Saturday 17 May 2025 01:02:33 +0000 (0:00:01.236) 0:00:07.489 ********** 2025-05-17 01:03:27.046943 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:27.046949 | orchestrator | 2025-05-17 01:03:27.046955 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-17 01:03:27.046961 | orchestrator | Saturday 17 May 2025 01:02:35 +0000 (0:00:01.098) 0:00:08.587 ********** 2025-05-17 01:03:27.046967 | orchestrator | changed: [testbed-manager] 2025-05-17 01:03:27.046974 | orchestrator | 2025-05-17 01:03:27.046980 | orchestrator | TASK [Remove temporary file for 
ceph_dashboard_password] *********************** 2025-05-17 01:03:27.046986 | orchestrator | Saturday 17 May 2025 01:02:54 +0000 (0:00:18.997) 0:00:27.585 ********** 2025-05-17 01:03:27.046992 | orchestrator | skipping: [testbed-manager] 2025-05-17 01:03:27.046999 | orchestrator | 2025-05-17 01:03:27.047017 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-17 01:03:27.047023 | orchestrator | 2025-05-17 01:03:27.047030 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-17 01:03:27.047036 | orchestrator | Saturday 17 May 2025 01:02:54 +0000 (0:00:00.637) 0:00:28.222 ********** 2025-05-17 01:03:27.047042 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:03:27.047048 | orchestrator | 2025-05-17 01:03:27.047054 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-17 01:03:27.047061 | orchestrator | 2025-05-17 01:03:27.047067 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-17 01:03:27.047073 | orchestrator | Saturday 17 May 2025 01:02:56 +0000 (0:00:01.908) 0:00:30.130 ********** 2025-05-17 01:03:27.047079 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:03:27.047085 | orchestrator | 2025-05-17 01:03:27.047091 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-17 01:03:27.047098 | orchestrator | 2025-05-17 01:03:27.047104 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-17 01:03:27.047110 | orchestrator | Saturday 17 May 2025 01:02:58 +0000 (0:00:02.060) 0:00:32.191 ********** 2025-05-17 01:03:27.047116 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:03:27.047122 | orchestrator | 2025-05-17 01:03:27.047128 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 
01:03:27.047136 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-17 01:03:27.047148 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:03:27.047159 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:03:27.047170 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:03:27.047182 | orchestrator | 2025-05-17 01:03:27.047194 | orchestrator | 2025-05-17 01:03:27.047205 | orchestrator | 2025-05-17 01:03:27.047216 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:03:27.047226 | orchestrator | Saturday 17 May 2025 01:02:59 +0000 (0:00:01.348) 0:00:33.540 ********** 2025-05-17 01:03:27.047234 | orchestrator | =============================================================================== 2025-05-17 01:03:27.047247 | orchestrator | Create admin user ------------------------------------------------------ 19.00s 2025-05-17 01:03:27.047264 | orchestrator | Restart ceph manager service -------------------------------------------- 5.32s 2025-05-17 01:03:27.047271 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.75s 2025-05-17 01:03:27.047277 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.24s 2025-05-17 01:03:27.047284 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.10s 2025-05-17 01:03:27.047290 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s 2025-05-17 01:03:27.047296 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.90s 2025-05-17 01:03:27.047302 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 
0.83s 2025-05-17 01:03:27.047308 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.82s 2025-05-17 01:03:27.047314 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.73s 2025-05-17 01:03:27.047320 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.64s 2025-05-17 01:03:27.047327 | orchestrator | 2025-05-17 01:03:27.047333 | orchestrator | 2025-05-17 01:03:27.047339 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:03:27.047345 | orchestrator | 2025-05-17 01:03:27.047351 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:03:27.047358 | orchestrator | Saturday 17 May 2025 01:02:09 +0000 (0:00:00.213) 0:00:00.213 ********** 2025-05-17 01:03:27.047364 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:03:27.047371 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:03:27.047377 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:03:27.047384 | orchestrator | 2025-05-17 01:03:27.047390 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 01:03:27.047399 | orchestrator | Saturday 17 May 2025 01:02:10 +0000 (0:00:00.649) 0:00:00.863 ********** 2025-05-17 01:03:27.047409 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-17 01:03:27.047419 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-17 01:03:27.047430 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-17 01:03:27.047441 | orchestrator | 2025-05-17 01:03:27.047447 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-17 01:03:27.047453 | orchestrator | 2025-05-17 01:03:27.047459 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-17 
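The "Bootstraph ceph dashboard" play earlier toggles the mgr dashboard module, sets its config keys, and creates an admin user from a temporary password file. A hedged sketch of equivalent `ceph` CLI calls under standard ceph tooling (the role's exact invocation may differ; the temp-file path is an illustrative assumption):

```shell
# Sketch of the dashboard bootstrap steps, assuming a working `ceph` CLI.
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard
# Create the admin user from a password file, then remove the file.
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
rm -f /tmp/ceph_dashboard_password
```

The follow-up plays then restart the ceph manager service on each node so the module settings take effect, matching the three "Restart ceph manager service" tasks in the log.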
01:03:27.047465 | orchestrator | Saturday 17 May 2025 01:02:11 +0000 (0:00:00.663) 0:00:01.527 ********** 2025-05-17 01:03:27.047471 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:03:27.047478 | orchestrator | 2025-05-17 01:03:27.047488 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-17 01:03:27.047498 | orchestrator | Saturday 17 May 2025 01:02:12 +0000 (0:00:01.692) 0:00:03.219 ********** 2025-05-17 01:03:27.047508 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-17 01:03:27.047518 | orchestrator | 2025-05-17 01:03:27.047528 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-17 01:03:27.047539 | orchestrator | Saturday 17 May 2025 01:02:16 +0000 (0:00:03.495) 0:00:06.714 ********** 2025-05-17 01:03:27.047556 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-17 01:03:27.047567 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-17 01:03:27.047577 | orchestrator | 2025-05-17 01:03:27.047583 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-17 01:03:27.047589 | orchestrator | Saturday 17 May 2025 01:02:22 +0000 (0:00:06.592) 0:00:13.307 ********** 2025-05-17 01:03:27.047596 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-17 01:03:27.047607 | orchestrator | 2025-05-17 01:03:27.047614 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-17 01:03:27.047620 | orchestrator | Saturday 17 May 2025 01:02:26 +0000 (0:00:03.667) 0:00:16.974 ********** 2025-05-17 01:03:27.047626 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-17 01:03:27.047632 | 
orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-17 01:03:27.047638 | orchestrator | 2025-05-17 01:03:27.047644 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-17 01:03:27.047650 | orchestrator | Saturday 17 May 2025 01:02:30 +0000 (0:00:03.987) 0:00:20.961 ********** 2025-05-17 01:03:27.047657 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-17 01:03:27.047663 | orchestrator | 2025-05-17 01:03:27.047669 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-17 01:03:27.047676 | orchestrator | Saturday 17 May 2025 01:02:34 +0000 (0:00:03.456) 0:00:24.422 ********** 2025-05-17 01:03:27.047682 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-17 01:03:27.047688 | orchestrator | 2025-05-17 01:03:27.047694 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-17 01:03:27.047700 | orchestrator | Saturday 17 May 2025 01:02:38 +0000 (0:00:04.217) 0:00:28.639 ********** 2025-05-17 01:03:27.047707 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:27.047713 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:03:27.047719 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:03:27.047725 | orchestrator | 2025-05-17 01:03:27.047732 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-17 01:03:27.047738 | orchestrator | Saturday 17 May 2025 01:02:38 +0000 (0:00:00.350) 0:00:28.990 ********** 2025-05-17 01:03:27.047755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.047765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.047776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.047787 | orchestrator | 2025-05-17 01:03:27.047815 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-17 01:03:27.047822 | orchestrator | Saturday 17 May 2025 01:02:40 +0000 (0:00:01.651) 0:00:30.641 ********** 2025-05-17 01:03:27.047828 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:27.047835 | orchestrator | 2025-05-17 01:03:27.047841 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-17 01:03:27.047847 | orchestrator | Saturday 17 May 2025 01:02:40 +0000 (0:00:00.352) 0:00:30.994 ********** 2025-05-17 01:03:27.047854 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:27.047860 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:03:27.047866 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:03:27.047872 | orchestrator | 2025-05-17 01:03:27.047879 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-17 01:03:27.047885 | orchestrator | Saturday 17 May 2025 01:02:41 +0000 (0:00:00.653) 0:00:31.647 ********** 2025-05-17 01:03:27.047891 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:03:27.047897 | orchestrator | 2025-05-17 01:03:27.047904 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-17 01:03:27.047910 | orchestrator | Saturday 17 May 2025 
01:02:42 +0000 (0:00:00.847) 0:00:32.495 ********** 2025-05-17 01:03:27.047922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.047930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.047936 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.047947 | orchestrator | 2025-05-17 01:03:27.047954 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-17 01:03:27.047960 | orchestrator | Saturday 17 May 2025 01:02:44 +0000 (0:00:02.190) 0:00:34.685 ********** 2025-05-17 01:03:27.047971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-17 01:03:27.047978 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:27.047984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-17 01:03:27.047995 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:03:27.048001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-17 01:03:27.048008 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:03:27.048018 | orchestrator | 2025-05-17 01:03:27.048024 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-17 01:03:27.048030 | orchestrator | Saturday 17 May 2025 01:02:45 +0000 (0:00:00.974) 0:00:35.660 ********** 2025-05-17 01:03:27.048037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-17 01:03:27.048043 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:27.048053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-17 01:03:27.048060 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:03:27.048066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-17 01:03:27.048073 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:03:27.048079 | orchestrator | 2025-05-17 01:03:27.048089 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-17 01:03:27.048096 | orchestrator | Saturday 17 May 2025 01:02:46 +0000 (0:00:01.534) 0:00:37.195 ********** 2025-05-17 01:03:27.048102 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.048113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.048123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.048130 | orchestrator | 2025-05-17 01:03:27.048136 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-17 01:03:27.048142 | orchestrator | Saturday 17 May 2025 01:02:48 +0000 (0:00:01.496) 0:00:38.691 ********** 2025-05-17 01:03:27.048149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2025-05-17 01:03:27.048161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.048176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.048188 | orchestrator | 2025-05-17 01:03:27.048200 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-17 01:03:27.048212 | orchestrator | Saturday 17 May 2025 01:02:51 +0000 (0:00:03.003) 0:00:41.695 ********** 2025-05-17 01:03:27.048219 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-17 01:03:27.048225 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-17 01:03:27.048231 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-17 01:03:27.048237 | orchestrator | 2025-05-17 01:03:27.048244 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-17 01:03:27.048257 | orchestrator | Saturday 17 May 2025 01:02:53 +0000 (0:00:02.155) 0:00:43.851 ********** 2025-05-17 01:03:27.048264 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:03:27.048270 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:03:27.048276 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:03:27.048282 | orchestrator | 2025-05-17 01:03:27.048288 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-17 01:03:27.048295 | orchestrator | Saturday 17 May 2025 01:02:55 +0000 (0:00:02.234) 0:00:46.085 ********** 2025-05-17 01:03:27.048301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-17 01:03:27.048308 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:27.048320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-17 01:03:27.048333 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:03:27.048339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-17 01:03:27.048346 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:03:27.048352 | orchestrator | 2025-05-17 01:03:27.048372 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-17 01:03:27.048379 | orchestrator | Saturday 17 May 2025 01:02:56 +0000 (0:00:01.290) 0:00:47.375 ********** 2025-05-17 01:03:27.048389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:27.048395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-17 01:03:27.048419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-05-17 01:03:27.048444 | orchestrator |
2025-05-17 01:03:27.048451 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-05-17 01:03:27.048458 | orchestrator | Saturday 17 May 2025 01:02:58 +0000 (0:00:01.365) 0:00:48.740 **********
2025-05-17 01:03:27.048464 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:27.048470 | orchestrator |
2025-05-17 01:03:27.048476 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-05-17 01:03:27.048482 | orchestrator | Saturday 17 May 2025 01:03:01 +0000 (0:00:03.038) 0:00:51.779 **********
2025-05-17 01:03:27.048488 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:27.048495 | orchestrator |
2025-05-17 01:03:27.048501 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-05-17 01:03:27.048507 | orchestrator | Saturday 17 May 2025 01:03:03 +0000 (0:00:02.455) 0:00:54.234 **********
2025-05-17 01:03:27.048513 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:27.048519 | orchestrator |
2025-05-17 01:03:27.048525 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-17 01:03:27.048531 | orchestrator | Saturday 17 May 2025 01:03:15 +0000 (0:00:11.961) 0:01:06.196 **********
2025-05-17 01:03:27.048537 | orchestrator |
2025-05-17 01:03:27.048544 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-17 01:03:27.048550 | orchestrator | Saturday 17 May 2025 01:03:15 +0000 (0:00:00.116) 0:01:06.312 **********
2025-05-17 01:03:27.048556 | orchestrator |
2025-05-17 01:03:27.048562 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-17 01:03:27.048568 | orchestrator | Saturday 17 May 2025 01:03:16 +0000 (0:00:00.266) 0:01:06.578 **********
2025-05-17 01:03:27.048574 | orchestrator |
2025-05-17 01:03:27.048580 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-05-17 01:03:27.048586 | orchestrator | Saturday 17 May 2025 01:03:16 +0000 (0:00:00.066) 0:01:06.645 **********
2025-05-17 01:03:27.048593 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:03:27.048599 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:03:27.048605 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:27.048611 | orchestrator |
2025-05-17 01:03:27.048617 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:03:27.048623 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-17 01:03:27.048630 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 01:03:27.048636 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-17 01:03:27.048642 | orchestrator |
2025-05-17 01:03:27.048649 | orchestrator |
2025-05-17 01:03:27.048655 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 01:03:27.048664 | orchestrator | Saturday 17 May 2025 01:03:25 +0000 (0:00:09.110) 0:01:15.755 **********
2025-05-17 01:03:27.048670 | orchestrator | ===============================================================================
2025-05-17 01:03:27.048677 | orchestrator | placement : Running placement bootstrap container ---------------------- 11.96s
2025-05-17 01:03:27.048683 | orchestrator | placement : Restart placement-api container ----------------------------- 9.11s
2025-05-17 01:03:27.048689 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.59s
2025-05-17 01:03:27.048700 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.22s
2025-05-17 01:03:27.048706 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.99s
2025-05-17 01:03:27.048713 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.67s
2025-05-17 01:03:27.048719 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.50s
2025-05-17 01:03:27.048725 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.46s
2025-05-17 01:03:27.048731 | orchestrator | placement : Creating placement databases -------------------------------- 3.04s
2025-05-17 01:03:27.048737 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.00s
2025-05-17 01:03:27.048743 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.46s
2025-05-17 01:03:27.048749 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.24s
2025-05-17 01:03:27.048755 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.19s
2025-05-17 01:03:27.048761 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.16s
2025-05-17 01:03:27.048768 | orchestrator | placement : include_tasks ----------------------------------------------- 1.69s
2025-05-17 01:03:27.048774 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.65s
2025-05-17 01:03:27.048780 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.53s
2025-05-17 01:03:27.048786 | orchestrator | placement : Copying over config.json files for services ----------------- 1.50s
2025-05-17 01:03:27.048809 | orchestrator | placement : Check placement containers ---------------------------------- 1.37s
2025-05-17 01:03:27.048821 | orchestrator | placement : Copying over existing policy file --------------------------- 1.29s
2025-05-17 01:03:27.048831 | orchestrator | 2025-05-17 01:03:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:27.048846 | orchestrator | 2025-05-17 01:03:27 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state STARTED
2025-05-17 01:03:27.048856 | orchestrator | 2025-05-17 01:03:27 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17
01:03:27.048864 | orchestrator | 2025-05-17 01:03:27 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:27.048871 | orchestrator | 2025-05-17 01:03:27 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:27.048877 | orchestrator | 2025-05-17 01:03:27 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:03:30.080146 | orchestrator | 2025-05-17 01:03:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:30.080667 | orchestrator | 2025-05-17 01:03:30 | INFO  | Task c97fee5c-1646-49cf-a8cc-e88347445ee6 is in state SUCCESS
2025-05-17 01:03:30.080702 | orchestrator |
2025-05-17 01:03:30.082480 | orchestrator |
2025-05-17 01:03:30.082520 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 01:03:30.082533 | orchestrator |
2025-05-17 01:03:30.082545 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 01:03:30.082557 | orchestrator | Saturday 17 May 2025 01:01:15 +0000 (0:00:00.489) 0:00:00.489 **********
2025-05-17 01:03:30.082568 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:03:30.082583 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:03:30.082594 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:03:30.082605 | orchestrator |
2025-05-17 01:03:30.082618 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 01:03:30.082629 | orchestrator | Saturday 17 May 2025 01:01:16 +0000 (0:00:00.565) 0:00:01.055 **********
2025-05-17 01:03:30.082640 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-05-17 01:03:30.082651 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-05-17 01:03:30.082685 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-05-17 01:03:30.082696 | orchestrator |
2025-05-17 01:03:30.082707 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-05-17 01:03:30.082718 | orchestrator |
2025-05-17 01:03:30.082729 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-17 01:03:30.082857 | orchestrator | Saturday 17 May 2025 01:01:16 +0000 (0:00:00.345) 0:00:01.400 **********
2025-05-17 01:03:30.083694 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 01:03:30.083744 | orchestrator |
2025-05-17 01:03:30.083855 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-05-17 01:03:30.083879 | orchestrator | Saturday 17 May 2025 01:01:17 +0000 (0:00:00.646) 0:00:02.047 **********
2025-05-17 01:03:30.083900 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-05-17 01:03:30.083920 | orchestrator |
2025-05-17 01:03:30.083939 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-05-17 01:03:30.083975 | orchestrator | Saturday 17 May 2025 01:01:20 +0000 (0:00:03.584) 0:00:05.632 **********
2025-05-17 01:03:30.084012 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-05-17 01:03:30.084032 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-05-17 01:03:30.084048 | orchestrator |
2025-05-17 01:03:30.084065 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-05-17 01:03:30.084082 | orchestrator | Saturday 17 May 2025 01:01:27 +0000 (0:00:06.696) 0:00:12.328 **********
2025-05-17 01:03:30.084099 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-17 01:03:30.084116 | orchestrator |
2025-05-17 01:03:30.084133 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-05-17 01:03:30.084151 | orchestrator | Saturday 17 May 2025 01:01:30 +0000 (0:00:03.456) 0:00:15.785 **********
2025-05-17 01:03:30.084169 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-17 01:03:30.084188 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-05-17 01:03:30.084207 | orchestrator |
2025-05-17 01:03:30.084225 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-05-17 01:03:30.084244 | orchestrator | Saturday 17 May 2025 01:01:34 +0000 (0:00:03.675) 0:00:19.460 **********
2025-05-17 01:03:30.084263 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-17 01:03:30.084282 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-05-17 01:03:30.084300 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-05-17 01:03:30.084336 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-05-17 01:03:30.084359 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-05-17 01:03:30.084378 | orchestrator |
2025-05-17 01:03:30.084396 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-05-17 01:03:30.084416 | orchestrator | Saturday 17 May 2025 01:01:51 +0000 (0:00:16.508) 0:00:35.969 **********
2025-05-17 01:03:30.084435 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-05-17 01:03:30.084453 | orchestrator |
2025-05-17 01:03:30.084471 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-05-17 01:03:30.084489 | orchestrator | Saturday 17 May 2025 01:01:55 +0000 (0:00:04.477) 0:00:40.446 **********
2025-05-17 01:03:30.084512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image':
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.084699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.084742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.084870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.084893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.084916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.085031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.085083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-17 01:03:30.085114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-17 01:03:30.085133 | orchestrator |
2025-05-17 01:03:30.085151 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-05-17 01:03:30.085169 | orchestrator | Saturday 17 May 2025 01:01:58 +0000 (0:00:02.947) 0:00:43.393 **********
2025-05-17 01:03:30.085187 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-05-17 01:03:30.085206 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-05-17 01:03:30.085225 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-05-17 01:03:30.085244 | orchestrator |
2025-05-17 01:03:30.085262 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-05-17 01:03:30.085279 | orchestrator | Saturday 17 May 2025 01:02:00 +0000 (0:00:01.568) 0:00:44.962 **********
2025-05-17 01:03:30.085298 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:03:30.085317 | orchestrator |
2025-05-17 01:03:30.085335 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-05-17 01:03:30.085354 | orchestrator | Saturday 17 May 2025 01:02:00 +0000 (0:00:00.221) 0:00:45.183 **********
2025-05-17 01:03:30.085372 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:03:30.085390 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:03:30.085409 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:03:30.085426 | orchestrator |
2025-05-17 01:03:30.085444 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-17 01:03:30.085463 | orchestrator | Saturday 17 May 2025 01:02:00 +0000 (0:00:00.505) 0:00:45.688 **********
2025-05-17 01:03:30.087165 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 01:03:30.087202 | orchestrator |
2025-05-17 01:03:30.087215 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-05-17 01:03:30.087281 | orchestrator | Saturday 17 May 2025 01:02:01 +0000 (0:00:00.821) 0:00:46.510 **********
2025-05-17 01:03:30.087294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode':
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.087326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.087348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.087374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.087394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.087404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 
'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.087422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.087433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.087455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-17 01:03:30.087466 | orchestrator |
2025-05-17 01:03:30.087476 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-05-17 01:03:30.087485 | orchestrator | Saturday 17 May 2025 01:02:05 +0000 (0:00:03.469) 0:00:49.979 **********
2025-05-17 01:03:30.087495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-17 01:03:30.087512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener',
'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087543 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:30.087554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 01:03:30.087570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087597 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:03:30.087607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 01:03:30.087625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087646 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:03:30.087656 | orchestrator | 2025-05-17 01:03:30.087666 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-17 01:03:30.087676 | orchestrator | Saturday 17 May 2025 01:02:06 +0000 (0:00:01.635) 0:00:51.615 ********** 2025-05-17 01:03:30.087690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 01:03:30.087707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087727 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:30.087744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 01:03:30.087755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087780 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:03:30.087814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 01:03:30.087834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.087854 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:03:30.087864 | orchestrator | 2025-05-17 01:03:30.087874 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-17 01:03:30.087901 | orchestrator | Saturday 17 May 2025 01:02:09 +0000 (0:00:02.271) 0:00:53.887 ********** 2025-05-17 01:03:30.087912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.087928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.087945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.087956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.087974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.087985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.087999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088035 | orchestrator | 2025-05-17 01:03:30.088045 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-17 01:03:30.088055 | orchestrator | Saturday 17 May 2025 01:02:12 +0000 (0:00:03.619) 0:00:57.507 ********** 2025-05-17 01:03:30.088065 | 
orchestrator | changed: [testbed-node-2] 2025-05-17 01:03:30.088074 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:03:30.088084 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:03:30.088094 | orchestrator | 2025-05-17 01:03:30.088114 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-17 01:03:30.088124 | orchestrator | Saturday 17 May 2025 01:02:14 +0000 (0:00:02.147) 0:00:59.655 ********** 2025-05-17 01:03:30.088134 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-17 01:03:30.088144 | orchestrator | 2025-05-17 01:03:30.088154 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-17 01:03:30.088164 | orchestrator | Saturday 17 May 2025 01:02:16 +0000 (0:00:01.474) 0:01:01.130 ********** 2025-05-17 01:03:30.088173 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:03:30.088183 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:30.088192 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:03:30.088202 | orchestrator | 2025-05-17 01:03:30.088212 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-17 01:03:30.088221 | orchestrator | Saturday 17 May 2025 01:02:18 +0000 (0:00:01.802) 0:01:02.932 ********** 2025-05-17 01:03:30.088239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.088261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.088283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.088305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088378 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088388 | orchestrator | 2025-05-17 01:03:30.088398 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-17 01:03:30.088408 | orchestrator | Saturday 17 May 2025 01:02:29 +0000 (0:00:11.614) 0:01:14.546 ********** 2025-05-17 01:03:30.088418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 01:03:30.088435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.088453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.088463 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:03:30.088478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 01:03:30.088489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.088500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.088510 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:03:30.088526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-17 01:03:30.088543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.088558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:03:30.088568 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:03:30.088578 | orchestrator | 2025-05-17 01:03:30.088588 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-17 01:03:30.088598 | orchestrator | Saturday 17 May 2025 01:02:31 +0000 (0:00:01.442) 0:01:15.989 ********** 2025-05-17 01:03:30.088608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.088618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.088636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-17 01:03:30.088653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-05-17 01:03:30.088689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:03:30.088722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})
2025-05-17 01:03:30.088732 | orchestrator |
2025-05-17 01:03:30.088742 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-17 01:03:30.088752 | orchestrator | Saturday 17 May 2025 01:02:34 +0000 (0:00:03.340) 0:01:19.330 **********
2025-05-17 01:03:30.088762 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:03:30.088772 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:03:30.088782 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:03:30.088856 | orchestrator |
2025-05-17 01:03:30.088869 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-05-17 01:03:30.088878 | orchestrator | Saturday 17 May 2025 01:02:35 +0000 (0:00:00.746) 0:01:20.076 **********
2025-05-17 01:03:30.088888 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:30.088898 | orchestrator |
2025-05-17 01:03:30.088908 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-05-17 01:03:30.088917 | orchestrator | Saturday 17 May 2025 01:02:37 +0000 (0:00:02.505) 0:01:22.582 **********
2025-05-17 01:03:30.088927 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:30.088936 | orchestrator |
2025-05-17 01:03:30.088946 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-05-17 01:03:30.088955 | orchestrator | Saturday 17 May 2025 01:02:40 +0000 (0:00:02.528) 0:01:25.111 **********
2025-05-17 01:03:30.088965 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:30.088975 | orchestrator |
2025-05-17 01:03:30.088984 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-17 01:03:30.088997 | orchestrator | Saturday 17 May 2025 01:02:50 +0000 (0:00:10.399) 0:01:35.510 **********
2025-05-17 01:03:30.089005 | orchestrator |
2025-05-17 01:03:30.089013 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-17 01:03:30.089021 | orchestrator | Saturday 17 May 2025 01:02:50 +0000 (0:00:00.043) 0:01:35.554 **********
2025-05-17 01:03:30.089029 | orchestrator |
2025-05-17 01:03:30.089036 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-17 01:03:30.089044 | orchestrator | Saturday 17 May 2025 01:02:50 +0000 (0:00:00.112) 0:01:35.666 **********
2025-05-17 01:03:30.089052 | orchestrator |
2025-05-17 01:03:30.089060 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-05-17 01:03:30.089068 | orchestrator | Saturday 17 May 2025 01:02:50 +0000 (0:00:00.042) 0:01:35.709 **********
2025-05-17 01:03:30.089076 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:03:30.089084 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:30.089092 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:03:30.089099 | orchestrator |
2025-05-17 01:03:30.089107 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-05-17 01:03:30.089115 | orchestrator | Saturday 17 May 2025 01:03:03 +0000 (0:00:12.552) 0:01:48.261 **********
2025-05-17 01:03:30.089123 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:30.089131 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:03:30.089139 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:03:30.089147 | orchestrator |
2025-05-17 01:03:30.089155 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-05-17 01:03:30.089168 | orchestrator | Saturday 17 May 2025 01:03:14 +0000 (0:00:10.701) 0:01:58.963 **********
2025-05-17 01:03:30.089176 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:03:30.089184 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:03:30.089192 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:03:30.089200 | orchestrator |
2025-05-17 01:03:30.089208 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:03:30.089216 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-17 01:03:30.089225 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-17 01:03:30.089233 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-17 01:03:30.089241 | orchestrator |
2025-05-17 01:03:30.089249 | orchestrator |
2025-05-17 01:03:30.089257 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 01:03:30.089265 | orchestrator | Saturday 17 May 2025 01:03:28 +0000 (0:00:14.166) 0:02:13.130 **********
2025-05-17 01:03:30.089273 | orchestrator | ===============================================================================
2025-05-17 01:03:30.089281 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.51s
2025-05-17 01:03:30.089288 | orchestrator | barbican : Restart barbican-worker container --------------------------- 14.17s
2025-05-17 01:03:30.089308 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.55s
2025-05-17 01:03:30.089316 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.61s
2025-05-17 01:03:30.089324 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.70s
2025-05-17 01:03:30.089332 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.40s
2025-05-17 01:03:30.089340 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.70s
2025-05-17 01:03:30.089353 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.48s
2025-05-17 01:03:30.089362 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.68s
2025-05-17 01:03:30.089370 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.62s
2025-05-17 01:03:30.089378 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.58s
2025-05-17 01:03:30.089385 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.47s
2025-05-17 01:03:30.089393 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.46s
2025-05-17 01:03:30.089401 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.34s
2025-05-17 01:03:30.089409 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.95s
2025-05-17 01:03:30.089417 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.53s
2025-05-17 01:03:30.089425 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.51s
2025-05-17 01:03:30.089433 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.27s
2025-05-17 01:03:30.089441 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.15s
2025-05-17 01:03:30.089449 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 1.80s
2025-05-17 01:03:30.089457 | orchestrator | 2025-05-17 01:03:30 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:30.089465 | orchestrator | 2025-05-17 01:03:30 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:30.089473 | orchestrator | 2025-05-17 01:03:30 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:30.089481 | orchestrator | 2025-05-17 01:03:30 | INFO  | Wait 1 second(s) until the next check
2025-05-17
01:03:33.109672 | orchestrator | 2025-05-17 01:03:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:33.109849 | orchestrator | 2025-05-17 01:03:33 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:33.109882 | orchestrator | 2025-05-17 01:03:33 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:33.110586 | orchestrator | 2025-05-17 01:03:33 | INFO  | Task 508e2f79-ce12-4202-acb0-528d1a3e1f94 is in state STARTED
2025-05-17 01:03:33.111185 | orchestrator | 2025-05-17 01:03:33 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:33.111237 | orchestrator | 2025-05-17 01:03:33 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:03:36.135220 | orchestrator | 2025-05-17 01:03:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:36.135526 | orchestrator | 2025-05-17 01:03:36 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:36.136474 | orchestrator | 2025-05-17 01:03:36 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:36.137210 | orchestrator | 2025-05-17 01:03:36 | INFO  | Task 508e2f79-ce12-4202-acb0-528d1a3e1f94 is in state SUCCESS
2025-05-17 01:03:36.137976 | orchestrator | 2025-05-17 01:03:36 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:36.138150 | orchestrator | 2025-05-17 01:03:36 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:03:39.168437 | orchestrator | 2025-05-17 01:03:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:39.168943 | orchestrator | 2025-05-17 01:03:39 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:03:39.169589 | orchestrator | 2025-05-17 01:03:39 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:39.170330 | orchestrator | 2025-05-17 01:03:39 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:39.171541 | orchestrator | 2025-05-17 01:03:39 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:39.171835 | orchestrator | 2025-05-17 01:03:39 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:03:42.195494 | orchestrator | 2025-05-17 01:03:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:42.195640 | orchestrator | 2025-05-17 01:03:42 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:03:42.195987 | orchestrator | 2025-05-17 01:03:42 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:42.198316 | orchestrator | 2025-05-17 01:03:42 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:42.204120 | orchestrator | 2025-05-17 01:03:42 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:42.204198 | orchestrator | 2025-05-17 01:03:42 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:03:45.233993 | orchestrator | 2025-05-17 01:03:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:45.234235 | orchestrator | 2025-05-17 01:03:45 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:03:45.236510 | orchestrator | 2025-05-17 01:03:45 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:45.236545 | orchestrator | 2025-05-17 01:03:45 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:45.236586 | orchestrator | 2025-05-17 01:03:45 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:45.236599 | orchestrator | 2025-05-17 01:03:45 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:03:48.258739 | orchestrator | 2025-05-17 01:03:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:48.258909 | orchestrator | 2025-05-17 01:03:48 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:03:48.259167 | orchestrator | 2025-05-17 01:03:48 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:48.259596 | orchestrator | 2025-05-17 01:03:48 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:48.260272 | orchestrator | 2025-05-17 01:03:48 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:48.260312 | orchestrator | 2025-05-17 01:03:48 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:03:51.282952 | orchestrator | 2025-05-17 01:03:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:51.284319 | orchestrator | 2025-05-17 01:03:51 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:03:51.284350 | orchestrator | 2025-05-17 01:03:51 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:51.284362 | orchestrator | 2025-05-17 01:03:51 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:51.284374 | orchestrator | 2025-05-17 01:03:51 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:51.284385 | orchestrator | 2025-05-17 01:03:51 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:03:54.311883 | orchestrator | 2025-05-17 01:03:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:54.312200 | orchestrator | 2025-05-17 01:03:54 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:03:54.313470 | orchestrator | 2025-05-17 01:03:54 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:54.313833 | orchestrator | 2025-05-17 01:03:54 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:54.315315 | orchestrator | 2025-05-17 01:03:54 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:54.315387 | orchestrator | 2025-05-17 01:03:54 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:03:57.355902 | orchestrator | 2025-05-17 01:03:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:03:57.356026 | orchestrator | 2025-05-17 01:03:57 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:03:57.356043 | orchestrator | 2025-05-17 01:03:57 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:03:57.356056 | orchestrator | 2025-05-17 01:03:57 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:03:57.356067 | orchestrator | 2025-05-17 01:03:57 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:03:57.356079 | orchestrator | 2025-05-17 01:03:57 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:00.387141 | orchestrator | 2025-05-17 01:04:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:00.389300 | orchestrator | 2025-05-17 01:04:00 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:00.389375 | orchestrator | 2025-05-17 01:04:00 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:00.389386 | orchestrator | 2025-05-17 01:04:00 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:04:00.389411 | orchestrator | 2025-05-17 01:04:00 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:00.389431 | orchestrator | 2025-05-17 01:04:00 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:03.420486 | orchestrator | 2025-05-17 01:04:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:03.420617 | orchestrator | 2025-05-17 01:04:03 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:03.422235 | orchestrator | 2025-05-17 01:04:03 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:03.422267 | orchestrator | 2025-05-17 01:04:03 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:04:03.423967 | orchestrator | 2025-05-17 01:04:03 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:03.423981 | orchestrator | 2025-05-17 01:04:03 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:06.464731 | orchestrator | 2025-05-17 01:04:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:06.465400 | orchestrator | 2025-05-17 01:04:06 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:06.470600 | orchestrator | 2025-05-17 01:04:06 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:06.470633 | orchestrator | 2025-05-17 01:04:06 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:04:06.470659 | orchestrator | 2025-05-17 01:04:06 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:06.470667 | orchestrator | 2025-05-17 01:04:06 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:09.515880 | orchestrator | 2025-05-17 01:04:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:09.516001 | orchestrator | 2025-05-17 01:04:09 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:09.516018 | orchestrator | 2025-05-17 01:04:09 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:09.516031 | orchestrator | 2025-05-17 01:04:09 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:04:09.516042 | orchestrator | 2025-05-17 01:04:09 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:09.516054 | orchestrator | 2025-05-17 01:04:09 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:12.558311 | orchestrator | 2025-05-17 01:04:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:12.558621 | orchestrator | 2025-05-17 01:04:12 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:12.560979 | orchestrator | 2025-05-17 01:04:12 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:12.561631 | orchestrator | 2025-05-17 01:04:12 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:04:12.561724 | orchestrator | 2025-05-17 01:04:12 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:12.561736 | orchestrator | 2025-05-17 01:04:12 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:15.584614 | orchestrator | 2025-05-17 01:04:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:15.585651 | orchestrator | 2025-05-17 01:04:15 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:15.585703 | orchestrator | 2025-05-17 01:04:15 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:15.586119 | orchestrator | 2025-05-17 01:04:15 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:04:15.587386 | orchestrator | 2025-05-17 01:04:15 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:15.587412 | orchestrator | 2025-05-17 01:04:15 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:18.605939 | orchestrator | 2025-05-17 01:04:18 | INFO  | Task
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:18.606111 | orchestrator | 2025-05-17 01:04:18 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:18.606139 | orchestrator | 2025-05-17 01:04:18 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:18.606271 | orchestrator | 2025-05-17 01:04:18 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state STARTED
2025-05-17 01:04:18.606896 | orchestrator | 2025-05-17 01:04:18 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:18.606923 | orchestrator | 2025-05-17 01:04:18 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:21.643609 | orchestrator | 2025-05-17 01:04:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:21.643725 | orchestrator | 2025-05-17 01:04:21 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:21.644320 | orchestrator | 2025-05-17 01:04:21 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:21.644648 | orchestrator | 2025-05-17 01:04:21 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:21.645679 | orchestrator | 2025-05-17 01:04:21 | INFO  | Task 63711f50-efa8-4e9d-9084-38e47c250144 is in state SUCCESS
2025-05-17 01:04:21.645705 | orchestrator |
2025-05-17 01:04:21.645719 | orchestrator |
2025-05-17 01:04:21.645731 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 01:04:21.645744 | orchestrator |
2025-05-17 01:04:21.645755 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 01:04:21.645793 | orchestrator | Saturday 17 May 2025 01:03:33 +0000 (0:00:00.270) 0:00:00.270 **********
2025-05-17 01:04:21.645807 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:04:21.645820 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:04:21.645831 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:04:21.645843 | orchestrator |
2025-05-17 01:04:21.645854 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 01:04:21.645865 | orchestrator | Saturday 17 May 2025 01:03:33 +0000 (0:00:00.407) 0:00:00.677 **********
2025-05-17 01:04:21.645895 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-17 01:04:21.645907 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-17 01:04:21.645918 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-17 01:04:21.645928 | orchestrator |
2025-05-17 01:04:21.645939 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-05-17 01:04:21.645950 | orchestrator |
2025-05-17 01:04:21.645961 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-05-17 01:04:21.645972 | orchestrator | Saturday 17 May 2025 01:03:34 +0000 (0:00:00.593) 0:00:01.271 **********
2025-05-17 01:04:21.645983 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:04:21.645994 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:04:21.646005 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:04:21.646252 | orchestrator |
2025-05-17 01:04:21.646274 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:04:21.646286 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:04:21.646299 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:04:21.646311 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:04:21.646322 | orchestrator |
2025-05-17 01:04:21.646333 | orchestrator |
2025-05-17 01:04:21.646344 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 01:04:21.646355 | orchestrator | Saturday 17 May 2025 01:03:35 +0000 (0:00:00.669) 0:00:01.941 **********
2025-05-17 01:04:21.646366 | orchestrator | ===============================================================================
2025-05-17 01:04:21.646377 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.67s
2025-05-17 01:04:21.646388 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s
2025-05-17 01:04:21.646399 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s
2025-05-17 01:04:21.646410 | orchestrator |
2025-05-17 01:04:21.647351 | orchestrator |
2025-05-17 01:04:21.647383 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 01:04:21.647394 | orchestrator |
2025-05-17 01:04:21.647406 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 01:04:21.647416 | orchestrator | Saturday 17 May 2025 01:01:14 +0000 (0:00:00.404) 0:00:00.404 **********
2025-05-17 01:04:21.647427 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:04:21.647439 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:04:21.647450 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:04:21.647461 | orchestrator |
2025-05-17 01:04:21.647472 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 01:04:21.647483 | orchestrator | Saturday 17 May 2025 01:01:14 +0000 (0:00:00.444) 0:00:00.849 **********
2025-05-17 01:04:21.647494 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-17 01:04:21.647505 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-17 01:04:21.647516 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-17
01:04:21.647528 | orchestrator |
2025-05-17 01:04:21.647539 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-17 01:04:21.647550 | orchestrator |
2025-05-17 01:04:21.647561 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-17 01:04:21.647572 | orchestrator | Saturday 17 May 2025 01:01:15 +0000 (0:00:00.543) 0:00:01.393 **********
2025-05-17 01:04:21.647583 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 01:04:21.647594 | orchestrator |
2025-05-17 01:04:21.647606 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-17 01:04:21.647618 | orchestrator | Saturday 17 May 2025 01:01:16 +0000 (0:00:01.076) 0:00:02.469 **********
2025-05-17 01:04:21.647629 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-05-17 01:04:21.647640 | orchestrator |
2025-05-17 01:04:21.647651 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-05-17 01:04:21.647662 | orchestrator | Saturday 17 May 2025 01:01:20 +0000 (0:00:03.601) 0:00:06.070 **********
2025-05-17 01:04:21.647673 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-05-17 01:04:21.647684 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-05-17 01:04:21.647695 | orchestrator |
2025-05-17 01:04:21.647707 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-05-17 01:04:21.647718 | orchestrator | Saturday 17 May 2025 01:01:26 +0000 (0:00:06.244) 0:00:12.315 **********
2025-05-17 01:04:21.647743 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-05-17 01:04:21.647754 | orchestrator |
2025-05-17 01:04:21.647800 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-05-17 01:04:21.647813 | orchestrator | Saturday 17 May 2025 01:01:29 +0000 (0:00:03.364) 0:00:15.679 **********
2025-05-17 01:04:21.647824 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-17 01:04:21.647836 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-05-17 01:04:21.647847 | orchestrator |
2025-05-17 01:04:21.647858 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-05-17 01:04:21.647868 | orchestrator | Saturday 17 May 2025 01:01:33 +0000 (0:00:04.032) 0:00:19.712 **********
2025-05-17 01:04:21.647879 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-17 01:04:21.647891 | orchestrator |
2025-05-17 01:04:21.647902 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-05-17 01:04:21.647913 | orchestrator | Saturday 17 May 2025 01:01:36 +0000 (0:00:03.150) 0:00:22.863 **********
2025-05-17 01:04:21.647924 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-05-17 01:04:21.647936 | orchestrator |
2025-05-17 01:04:21.647960 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-05-17 01:04:21.647974 | orchestrator | Saturday 17 May 2025 01:01:41 +0000 (0:00:04.152) 0:00:27.016 **********
2025-05-17 01:04:21.647991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-17 01:04:21.648025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-17 01:04:21.648040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-17 01:04:21.648062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-17 01:04:21.648083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-17 01:04:21.648098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-17 01:04:21.648112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.648954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.649049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.649065 | orchestrator |
2025-05-17 01:04:21.649077 | orchestrator | TASK [designate : Check if policies shall be
overwritten] **********************
2025-05-17 01:04:21.649089 | orchestrator | Saturday 17 May 2025 01:01:44 +0000 (0:00:03.159) 0:00:30.175 **********
2025-05-17 01:04:21.649100 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:04:21.649112 | orchestrator |
2025-05-17 01:04:21.649123 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-05-17 01:04:21.649133 | orchestrator | Saturday 17 May 2025 01:01:44 +0000 (0:00:00.111) 0:00:30.286 **********
2025-05-17 01:04:21.649152 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:04:21.649164 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:04:21.649305 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:04:21.649318 | orchestrator |
2025-05-17 01:04:21.649331 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-17 01:04:21.649343 | orchestrator | Saturday 17 May 2025 01:01:44 +0000 (0:00:00.381) 0:00:30.668 **********
2025-05-17 01:04:21.649789 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 01:04:21.649803 | orchestrator |
2025-05-17 01:04:21.649814 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-05-17 01:04:21.649825 | orchestrator | Saturday 17 May 2025 01:01:45 +0000 (0:00:00.585) 0:00:31.254 **********
2025-05-17 01:04:21.649837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-17 01:04:21.649856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-17 01:04:21.649869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-17 01:04:21.649965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-17 01:04:21.649994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-17 01:04:21.650006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-17 01:04:21.650061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650316 |
orchestrator |
2025-05-17 01:04:21.650328 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-05-17 01:04:21.650340 | orchestrator | Saturday 17 May 2025 01:01:51 +0000 (0:00:06.573) 0:00:37.827 **********
2025-05-17 01:04:21.650351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-17 01:04:21.650363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-17 01:04:21.650380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650467 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:04:21.650480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-17 01:04:21.650492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-17 01:04:21.650509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.650539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650597 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:04:21.650612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.650627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 01:04:21.650646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650736 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:04:21.650747 | orchestrator | 2025-05-17 01:04:21.650759 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-17 01:04:21.650792 | orchestrator | Saturday 17 May 2025 01:01:54 +0000 (0:00:02.371) 0:00:40.198 ********** 2025-05-17 01:04:21.650804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.650816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 01:04:21.650833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650922 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:04:21.650933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.650945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 01:04:21.650956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.650976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651182 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:04:21.651194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.651206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 01:04:21.651218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651312 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:04:21.651323 | orchestrator | 2025-05-17 01:04:21.651334 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-17 01:04:21.651345 | orchestrator | Saturday 17 May 2025 01:01:56 +0000 (0:00:01.775) 0:00:41.974 ********** 2025-05-17 01:04:21.651356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 01:04:21.651368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 01:04:21.651392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 01:04:21.651404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.651809 | orchestrator | 2025-05-17 01:04:21.651822 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-17 01:04:21.651833 | orchestrator | Saturday 17 May 2025 01:02:02 +0000 (0:00:06.637) 0:00:48.611 ********** 2025-05-17 01:04:21.651850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
2025-05-17 01:04:21.651862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 01:04:21.651908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 01:04:21.651922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651969 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.651981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652234 | orchestrator | 2025-05-17 01:04:21.652245 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-17 01:04:21.652256 | orchestrator | Saturday 17 May 2025 01:02:25 +0000 (0:00:22.783) 0:01:11.394 ********** 2025-05-17 01:04:21.652267 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-17 01:04:21.652278 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-17 01:04:21.652289 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-17 01:04:21.652300 | orchestrator | 2025-05-17 01:04:21.652310 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-17 01:04:21.652321 | orchestrator | Saturday 17 May 2025 01:02:33 +0000 (0:00:07.676) 0:01:19.071 ********** 2025-05-17 01:04:21.652332 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-17 01:04:21.652348 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-17 01:04:21.652360 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-17 01:04:21.652371 | orchestrator | 2025-05-17 01:04:21.652382 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-17 01:04:21.652392 | orchestrator | Saturday 17 May 2025 01:02:38 +0000 (0:00:05.175) 0:01:24.246 ********** 2025-05-17 01:04:21.652404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.652423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.652440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.652452 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652829 | orchestrator | 2025-05-17 01:04:21.652848 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-17 01:04:21.652866 | orchestrator | Saturday 17 May 2025 01:02:41 +0000 (0:00:03.472) 0:01:27.719 ********** 2025-05-17 01:04:21.652894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.652914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.652936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.652957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.652981 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.652998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2025-05-17 01:04:21.653028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653070 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653219 | orchestrator | 2025-05-17 01:04:21.653231 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-17 01:04:21.653242 | orchestrator | Saturday 17 May 2025 01:02:44 +0000 (0:00:03.152) 0:01:30.871 ********** 2025-05-17 01:04:21.653253 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:04:21.653264 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:04:21.653275 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:04:21.653286 | orchestrator | 2025-05-17 01:04:21.653297 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-17 01:04:21.653307 | orchestrator | Saturday 17 May 2025 01:02:45 +0000 (0:00:00.793) 0:01:31.665 ********** 2025-05-17 01:04:21.653325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.653337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 01:04:21.653349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2025-05-17 01:04:21.653377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653426 | orchestrator | skipping: 
[testbed-node-0] 2025-05-17 01:04:21.653437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.653449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 01:04:21.653460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653537 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:04:21.653548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-17 01:04:21.653560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-17 01:04:21.653576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653648 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:04:21.653659 | orchestrator | 2025-05-17 01:04:21.653670 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-17 01:04:21.653681 | orchestrator | Saturday 17 
May 2025 01:02:47 +0000 (0:00:01.363) 0:01:33.028 ********** 2025-05-17 01:04:21.653693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 01:04:21.653710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 01:04:21.653728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-17 01:04:21.653747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.653984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.653996 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.654008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-17 01:04:21.654091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-17 01:04:21.654123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 
'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-17 01:04:21.654136 | orchestrator |
2025-05-17 01:04:21.654147 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-17 01:04:21.654158 | orchestrator | Saturday 17 May 2025 01:02:52 +0000 (0:00:05.078) 0:01:38.106 **********
2025-05-17 01:04:21.654169 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:04:21.654180 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:04:21.654192 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:04:21.654202 | orchestrator |
2025-05-17 01:04:21.654213 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-05-17 01:04:21.654224 | orchestrator | Saturday 17 May 2025 01:02:53 +0000 (0:00:00.783) 0:01:38.892 **********
2025-05-17 01:04:21.654235 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-05-17 01:04:21.654246 | orchestrator |
2025-05-17 01:04:21.654256 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-05-17 01:04:21.654267 | orchestrator | Saturday 17 May 2025 01:02:55 +0000 (0:00:02.195) 0:01:41.088 **********
2025-05-17 01:04:21.654278 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-17 01:04:21.654289 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-05-17 01:04:21.654300 | orchestrator |
2025-05-17 01:04:21.654310 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
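The loop output above shows kolla-ansible iterating the designate services dict: every service whose `enabled` flag is true reports `changed` on each node, while `designate-sink` (`enabled: False`) is reported as `skipping`. A minimal sketch of that selection logic, using trimmed-down service entries copied from the log; this is an illustration, not kolla-ansible's actual implementation:

```python
# Trimmed service entries as they appear in the loop output above;
# the real dict also carries image, volumes, healthcheck, etc.
services = {
    "designate-worker": {"container_name": "designate_worker", "enabled": True},
    "designate-sink": {"container_name": "designate_sink", "enabled": False},
}


def select_enabled(services: dict) -> list[str]:
    """Return container names of services that would be deployed.

    Services with enabled=False are skipped, matching the
    "skipping: [testbed-node-*]" lines for designate-sink above.
    """
    return [
        svc["container_name"]
        for svc in services.values()
        if svc.get("enabled", False)
    ]


print(select_enabled(services))  # → ['designate_worker']
```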
2025-05-17 01:04:21.654321 | orchestrator | Saturday 17 May 2025 01:02:57 +0000 (0:00:02.153) 0:01:43.241 **********
2025-05-17 01:04:21.654332 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:04:21.654342 | orchestrator |
2025-05-17 01:04:21.654353 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-17 01:04:21.654363 | orchestrator | Saturday 17 May 2025 01:03:11 +0000 (0:00:14.553) 0:01:57.795 **********
2025-05-17 01:04:21.654374 | orchestrator |
2025-05-17 01:04:21.654385 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-17 01:04:21.654396 | orchestrator | Saturday 17 May 2025 01:03:11 +0000 (0:00:00.050) 0:01:57.846 **********
2025-05-17 01:04:21.654406 | orchestrator |
2025-05-17 01:04:21.654417 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-17 01:04:21.654436 | orchestrator | Saturday 17 May 2025 01:03:12 +0000 (0:00:00.048) 0:01:57.895 **********
2025-05-17 01:04:21.654447 | orchestrator |
2025-05-17 01:04:21.654458 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-05-17 01:04:21.654469 | orchestrator | Saturday 17 May 2025 01:03:12 +0000 (0:00:00.052) 0:01:57.947 **********
2025-05-17 01:04:21.654480 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:04:21.654491 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:04:21.654501 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:04:21.654512 | orchestrator |
2025-05-17 01:04:21.654523 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-05-17 01:04:21.654534 | orchestrator | Saturday 17 May 2025 01:03:27 +0000 (0:00:15.631) 0:02:13.579 **********
2025-05-17 01:04:21.654545 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:04:21.654555 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:04:21.654573 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:04:21.654584 | orchestrator |
2025-05-17 01:04:21.654595 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-05-17 01:04:21.654606 | orchestrator | Saturday 17 May 2025 01:03:35 +0000 (0:00:07.607) 0:02:21.186 **********
2025-05-17 01:04:21.654617 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:04:21.654628 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:04:21.654639 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:04:21.654650 | orchestrator |
2025-05-17 01:04:21.654660 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-05-17 01:04:21.654671 | orchestrator | Saturday 17 May 2025 01:03:47 +0000 (0:00:12.005) 0:02:33.191 **********
2025-05-17 01:04:21.654683 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:04:21.654694 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:04:21.654704 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:04:21.654715 | orchestrator |
2025-05-17 01:04:21.654726 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-05-17 01:04:21.654737 | orchestrator | Saturday 17 May 2025 01:03:54 +0000 (0:00:07.320) 0:02:40.512 **********
2025-05-17 01:04:21.654748 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:04:21.654759 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:04:21.654827 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:04:21.654839 | orchestrator |
2025-05-17 01:04:21.654850 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-05-17 01:04:21.654861 | orchestrator | Saturday 17 May 2025 01:04:01 +0000 (0:00:06.399) 0:02:46.911 **********
2025-05-17 01:04:21.654871 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:04:21.654882 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:04:21.654892 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:04:21.654903 | orchestrator |
2025-05-17 01:04:21.654913 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-05-17 01:04:21.654924 | orchestrator | Saturday 17 May 2025 01:04:13 +0000 (0:00:12.194) 0:02:59.106 **********
2025-05-17 01:04:21.654935 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:04:21.654946 | orchestrator |
2025-05-17 01:04:21.654956 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:04:21.654968 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-17 01:04:21.654980 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-17 01:04:21.654991 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-17 01:04:21.655001 | orchestrator |
2025-05-17 01:04:21.655012 | orchestrator |
2025-05-17 01:04:21.655023 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 01:04:21.655034 | orchestrator | Saturday 17 May 2025 01:04:18 +0000 (0:00:05.250) 0:03:04.357 **********
2025-05-17 01:04:21.655050 | orchestrator | ===============================================================================
2025-05-17 01:04:21.655062 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.78s
2025-05-17 01:04:21.655073 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.63s
2025-05-17 01:04:21.655084 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.55s
2025-05-17 01:04:21.655095 | orchestrator | designate : Restart designate-worker container ------------------------- 12.19s
2025-05-17 01:04:21.655105 | orchestrator | designate : Restart designate-central container ------------------------ 12.01s
2025-05-17 01:04:21.655116 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.68s
2025-05-17 01:04:21.655127 | orchestrator | designate : Restart designate-api container ----------------------------- 7.61s
2025-05-17 01:04:21.655138 | orchestrator | designate : Restart designate-producer container ------------------------ 7.32s
2025-05-17 01:04:21.655156 | orchestrator | designate : Copying over config.json files for services ----------------- 6.64s
2025-05-17 01:04:21.655167 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.57s
2025-05-17 01:04:21.655177 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.40s
2025-05-17 01:04:21.655188 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.24s
2025-05-17 01:04:21.655199 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.25s
2025-05-17 01:04:21.655210 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.18s
2025-05-17 01:04:21.655221 | orchestrator | designate : Check designate containers ---------------------------------- 5.08s
2025-05-17 01:04:21.655231 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.15s
2025-05-17 01:04:21.655242 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.03s
2025-05-17 01:04:21.655260 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.60s
2025-05-17 01:04:21.655271 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.47s
2025-05-17 01:04:21.655282 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.36s
2025-05-17 01:04:21.655293 | orchestrator | 2025-05-17 01:04:21 | INFO  | Task 
0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:21.655304 | orchestrator | 2025-05-17 01:04:21 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:24.676387 | orchestrator | 2025-05-17 01:04:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:24.676504 | orchestrator | 2025-05-17 01:04:24 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:24.676852 | orchestrator | 2025-05-17 01:04:24 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:24.677116 | orchestrator | 2025-05-17 01:04:24 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:24.677797 | orchestrator | 2025-05-17 01:04:24 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:24.677881 | orchestrator | 2025-05-17 01:04:24 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:27.712307 | orchestrator | 2025-05-17 01:04:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:27.712556 | orchestrator | 2025-05-17 01:04:27 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:27.712987 | orchestrator | 2025-05-17 01:04:27 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:27.714302 | orchestrator | 2025-05-17 01:04:27 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:27.714477 | orchestrator | 2025-05-17 01:04:27 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:27.714504 | orchestrator | 2025-05-17 01:04:27 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:30.762364 | orchestrator | 2025-05-17 01:04:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:30.763312 | orchestrator | 2025-05-17 01:04:30 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:30.764563 | orchestrator | 2025-05-17 01:04:30 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:30.766189 | orchestrator | 2025-05-17 01:04:30 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:30.767510 | orchestrator | 2025-05-17 01:04:30 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:30.767584 | orchestrator | 2025-05-17 01:04:30 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:33.818938 | orchestrator | 2025-05-17 01:04:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:33.820898 | orchestrator | 2025-05-17 01:04:33 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:33.822994 | orchestrator | 2025-05-17 01:04:33 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:33.825593 | orchestrator | 2025-05-17 01:04:33 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:33.825613 | orchestrator | 2025-05-17 01:04:33 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:33.825624 | orchestrator | 2025-05-17 01:04:33 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:36.879847 | orchestrator | 2025-05-17 01:04:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:36.881417 | orchestrator | 2025-05-17 01:04:36 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:36.883151 | orchestrator | 2025-05-17 01:04:36 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:36.885632 | orchestrator | 2025-05-17 01:04:36 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:36.886160 | orchestrator | 2025-05-17 01:04:36 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:36.886324 | orchestrator | 2025-05-17 01:04:36 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:39.936110 | orchestrator | 2025-05-17 01:04:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:39.936213 | orchestrator | 2025-05-17 01:04:39 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:39.936947 | orchestrator | 2025-05-17 01:04:39 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:39.937854 | orchestrator | 2025-05-17 01:04:39 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:39.938746 | orchestrator | 2025-05-17 01:04:39 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:39.939586 | orchestrator | 2025-05-17 01:04:39 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:42.990301 | orchestrator | 2025-05-17 01:04:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:42.990726 | orchestrator | 2025-05-17 01:04:42 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:42.993466 | orchestrator | 2025-05-17 01:04:42 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:42.994358 | orchestrator | 2025-05-17 01:04:42 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:42.995340 | orchestrator | 2025-05-17 01:04:42 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:42.995366 | orchestrator | 2025-05-17 01:04:42 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:46.021292 | orchestrator | 2025-05-17 01:04:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:46.021519 | orchestrator | 2025-05-17 01:04:46 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:46.022360 | orchestrator | 2025-05-17 01:04:46 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:46.022864 | orchestrator | 2025-05-17 01:04:46 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:46.023661 | orchestrator | 2025-05-17 01:04:46 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:46.023702 | orchestrator | 2025-05-17 01:04:46 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:49.060640 | orchestrator | 2025-05-17 01:04:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:49.062433 | orchestrator | 2025-05-17 01:04:49 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:49.066248 | orchestrator | 2025-05-17 01:04:49 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:49.067849 | orchestrator | 2025-05-17 01:04:49 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:49.070114 | orchestrator | 2025-05-17 01:04:49 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:49.070189 | orchestrator | 2025-05-17 01:04:49 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:52.121483 | orchestrator | 2025-05-17 01:04:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:52.122930 | orchestrator | 2025-05-17 01:04:52 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:52.126660 | orchestrator | 2025-05-17 01:04:52 | INFO  | Task abdc1210-36d3-4ad0-8f73-0ae995cc4aad is in state STARTED
2025-05-17 01:04:52.128426 | orchestrator | 2025-05-17 01:04:52 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:52.130001 | orchestrator | 2025-05-17 01:04:52 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:52.131309 | orchestrator | 2025-05-17 01:04:52 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:52.131337 | orchestrator | 2025-05-17 01:04:52 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:55.174461 | orchestrator | 2025-05-17 01:04:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:55.175485 | orchestrator | 2025-05-17 01:04:55 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state STARTED
2025-05-17 01:04:55.175535 | orchestrator | 2025-05-17 01:04:55 | INFO  | Task abdc1210-36d3-4ad0-8f73-0ae995cc4aad is in state STARTED
2025-05-17 01:04:55.176945 | orchestrator | 2025-05-17 01:04:55 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:55.179995 | orchestrator | 2025-05-17 01:04:55 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:55.180472 | orchestrator | 2025-05-17 01:04:55 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:55.180498 | orchestrator | 2025-05-17 01:04:55 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:04:58.212287 | orchestrator | 2025-05-17 01:04:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:04:58.212398 | orchestrator | 2025-05-17 01:04:58 | INFO  | Task c0e83c81-ce0f-425e-8ed5-5f5d2f8981f3 is in state SUCCESS
2025-05-17 01:04:58.212413 | orchestrator | 2025-05-17 01:04:58 | INFO  | Task abdc1210-36d3-4ad0-8f73-0ae995cc4aad is in state STARTED
2025-05-17 01:04:58.212609 | orchestrator | 2025-05-17 01:04:58 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:04:58.213241 | orchestrator | 2025-05-17 01:04:58 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:04:58.213485 | orchestrator | 2025-05-17 01:04:58 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:04:58.213996 | orchestrator | 2025-05-17 01:04:58 | INFO  | Task 008c9236-22a2-4c72-8287-88bdd3d40d43 is in state STARTED
2025-05-17 01:04:58.214068 | orchestrator | 2025-05-17 01:04:58 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:01.235055 | orchestrator | 2025-05-17 01:05:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:01.235195 | orchestrator | 2025-05-17 01:05:01 | INFO  | Task abdc1210-36d3-4ad0-8f73-0ae995cc4aad is in state SUCCESS
2025-05-17 01:05:01.235909 | orchestrator | 2025-05-17 01:05:01 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:05:01.236122 | orchestrator | 2025-05-17 01:05:01 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:05:01.236556 | orchestrator | 2025-05-17 01:05:01 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:05:01.237229 | orchestrator | 2025-05-17 01:05:01 | INFO  | Task 008c9236-22a2-4c72-8287-88bdd3d40d43 is in state SUCCESS
2025-05-17 01:05:01.237249 | orchestrator | 2025-05-17 01:05:01 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:04.275460 | orchestrator | 2025-05-17 01:05:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:04.277856 | orchestrator | 2025-05-17 01:05:04 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED
2025-05-17 01:05:04.282150 | orchestrator | 2025-05-17 01:05:04 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:05:04.284237 | orchestrator | 2025-05-17 01:05:04 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:05:04.286646 | orchestrator | 2025-05-17 01:05:04 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:05:04.286742 | orchestrator | 2025-05-17 01:05:04 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:07.316718 | orchestrator | 2025-05-17 01:05:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:07.317705 | orchestrator | 2025-05-17 01:05:07 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED
2025-05-17 01:05:07.319377 | orchestrator | 2025-05-17 01:05:07 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:05:07.320893 | orchestrator | 2025-05-17 01:05:07 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:05:07.322184 | orchestrator | 2025-05-17 01:05:07 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:05:07.322235 | orchestrator | 2025-05-17 01:05:07 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:10.363367 | orchestrator | 2025-05-17 01:05:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:10.363472 | orchestrator | 2025-05-17 01:05:10 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED
2025-05-17 01:05:10.364258 | orchestrator | 2025-05-17 01:05:10 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:05:10.366588 | orchestrator | 2025-05-17 01:05:10 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:05:10.367114 | orchestrator | 2025-05-17 01:05:10 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:05:10.367148 | orchestrator | 2025-05-17 01:05:10 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:13.403407 | orchestrator | 2025-05-17 01:05:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:13.403506 | orchestrator | 2025-05-17 01:05:13 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED
2025-05-17 01:05:13.404664 | orchestrator | 2025-05-17 01:05:13 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:05:13.405408 | orchestrator | 2025-05-17 01:05:13 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:05:13.406465 | orchestrator | 2025-05-17 01:05:13 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:05:13.406482 | orchestrator | 2025-05-17 01:05:13 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:16.435446 | orchestrator | 2025-05-17 01:05:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:16.435557 | orchestrator | 2025-05-17 01:05:16 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED
2025-05-17 01:05:16.436050 | orchestrator | 2025-05-17 01:05:16 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:05:16.436569 | orchestrator | 2025-05-17 01:05:16 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:05:16.437153 | orchestrator | 2025-05-17 01:05:16 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:05:16.437175 | orchestrator | 2025-05-17 01:05:16 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:19.467152 | orchestrator | 2025-05-17 01:05:19 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:19.467375 | orchestrator | 2025-05-17 01:05:19 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED
2025-05-17 01:05:19.468226 | orchestrator | 2025-05-17 01:05:19 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:05:19.468717 | orchestrator | 2025-05-17 01:05:19 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:05:19.469618 | orchestrator | 2025-05-17 01:05:19 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:05:19.469724 | orchestrator | 2025-05-17 01:05:19 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:22.507274 | orchestrator | 2025-05-17 01:05:22 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:22.509203 | orchestrator | 2025-05-17 01:05:22 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED
2025-05-17 01:05:22.511731 | orchestrator | 2025-05-17 01:05:22 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:05:22.514953 | orchestrator | 2025-05-17 01:05:22 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:05:22.517670 | orchestrator | 2025-05-17 01:05:22 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:05:22.517711 | orchestrator | 2025-05-17 01:05:22 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:25.566565 | orchestrator | 2025-05-17 01:05:25 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:25.567137 | orchestrator | 2025-05-17 01:05:25 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED
2025-05-17 01:05:25.567881 | orchestrator | 2025-05-17 01:05:25 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED
2025-05-17 01:05:25.568784 | orchestrator | 2025-05-17 01:05:25 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED
2025-05-17 01:05:25.572456 | orchestrator | 2025-05-17 01:05:25 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED
2025-05-17 01:05:25.572537 | orchestrator | 2025-05-17 01:05:25 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:05:28.620439 | orchestrator | 2025-05-17 01:05:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:05:28.620533 | orchestrator | 2025-05-17 01:05:28 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED
2025-05-17 01:05:28.622200 | orchestrator | 2025-05-17 01:05:28 | INFO  | Task 
a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED 2025-05-17 01:05:28.622846 | orchestrator | 2025-05-17 01:05:28 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state STARTED 2025-05-17 01:05:28.623904 | orchestrator | 2025-05-17 01:05:28 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:05:28.623948 | orchestrator | 2025-05-17 01:05:28 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:05:31.659558 | orchestrator | 2025-05-17 01:05:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:05:31.660602 | orchestrator | 2025-05-17 01:05:31 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:05:31.662289 | orchestrator | 2025-05-17 01:05:31 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED 2025-05-17 01:05:31.665392 | orchestrator | 2025-05-17 01:05:31.665442 | orchestrator | 2025-05-17 01:05:31.665452 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:05:31.665459 | orchestrator | 2025-05-17 01:05:31.665466 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:05:31.665473 | orchestrator | Saturday 17 May 2025 01:04:23 +0000 (0:00:00.297) 0:00:00.297 ********** 2025-05-17 01:05:31.665480 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:05:31.665488 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:05:31.665495 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:05:31.665501 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:05:31.665508 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:05:31.665514 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:05:31.665521 | orchestrator | ok: [testbed-manager] 2025-05-17 01:05:31.665527 | orchestrator | 2025-05-17 01:05:31.665534 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 01:05:31.665540 
| orchestrator | Saturday 17 May 2025 01:04:24 +0000 (0:00:01.169) 0:00:01.469 **********
2025-05-17 01:05:31.665547 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-05-17 01:05:31.665553 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-17 01:05:31.665559 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-17 01:05:31.665566 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-05-17 01:05:31.665572 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-05-17 01:05:31.665578 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-05-17 01:05:31.665585 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-05-17 01:05:31.665591 | orchestrator |
2025-05-17 01:05:31.665597 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-17 01:05:31.665603 | orchestrator |
2025-05-17 01:05:31.665610 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-17 01:05:31.665616 | orchestrator | Saturday 17 May 2025 01:04:25 +0000 (0:00:00.619) 0:00:02.089 **********
2025-05-17 01:05:31.665623 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2025-05-17 01:05:31.665631 | orchestrator |
2025-05-17 01:05:31.665637 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-17 01:05:31.665644 | orchestrator | Saturday 17 May 2025 01:04:26 +0000 (0:00:01.118) 0:00:03.207 **********
2025-05-17 01:05:31.665670 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store))
2025-05-17 01:05:31.665677 | orchestrator |
2025-05-17 01:05:31.665683 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-05-17 01:05:31.665689 | orchestrator | Saturday 17 May 2025 01:04:30 +0000 (0:00:03.549) 0:00:06.757 **********
2025-05-17 01:05:31.665699 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-05-17 01:05:31.665711 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-05-17 01:05:31.665727 | orchestrator |
2025-05-17 01:05:31.665757 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-05-17 01:05:31.665763 | orchestrator | Saturday 17 May 2025 01:04:36 +0000 (0:00:06.459) 0:00:13.217 **********
2025-05-17 01:05:31.666118 | orchestrator | ok: [testbed-node-3] => (item=service)
2025-05-17 01:05:31.666147 | orchestrator |
2025-05-17 01:05:31.666157 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-05-17 01:05:31.666171 | orchestrator | Saturday 17 May 2025 01:04:40 +0000 (0:00:03.536) 0:00:16.753 **********
2025-05-17 01:05:31.666181 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-17 01:05:31.666191 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service)
2025-05-17 01:05:31.666202 | orchestrator |
2025-05-17 01:05:31.666212 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-05-17 01:05:31.666222 | orchestrator | Saturday 17 May 2025 01:04:44 +0000 (0:00:03.922) 0:00:20.675 **********
2025-05-17 01:05:31.666233 | orchestrator | ok: [testbed-node-3] => (item=admin)
2025-05-17 01:05:31.666243 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin)
2025-05-17 01:05:31.666254 | orchestrator |
2025-05-17 01:05:31.666264 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-05-17 01:05:31.666274 | orchestrator | Saturday 17 May 2025 01:04:50
+0000 (0:00:06.205) 0:00:26.880 **********
2025-05-17 01:05:31.666284 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin)
2025-05-17 01:05:31.666294 | orchestrator |
2025-05-17 01:05:31.666305 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:05:31.666315 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666325 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666336 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666347 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666358 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666382 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666394 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666404 | orchestrator |
2025-05-17 01:05:31.666415 | orchestrator |
2025-05-17 01:05:31.666425 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 01:05:31.666436 | orchestrator | Saturday 17 May 2025 01:04:55 +0000 (0:00:05.623) 0:00:32.504 **********
2025-05-17 01:05:31.666446 | orchestrator | ===============================================================================
2025-05-17 01:05:31.666456 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.46s
2025-05-17 01:05:31.666466 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.21s
2025-05-17 01:05:31.666490 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.62s
2025-05-17 01:05:31.666502 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.92s
2025-05-17 01:05:31.666512 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.55s
2025-05-17 01:05:31.666523 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.54s
2025-05-17 01:05:31.666533 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.17s
2025-05-17 01:05:31.666544 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.12s
2025-05-17 01:05:31.666554 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s
2025-05-17 01:05:31.666565 | orchestrator |
2025-05-17 01:05:31.666575 | orchestrator | None
2025-05-17 01:05:31.666587 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2025-05-17 01:05:31.666597 | orchestrator | -vvvv to see details
2025-05-17 01:05:31.666609 | orchestrator |
2025-05-17 01:05:31.666619 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 01:05:31.666629 | orchestrator |
2025-05-17 01:05:31.666638 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 01:05:31.666646 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:05:31.666657 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:05:31.666668 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:05:31.666678 | orchestrator |
2025-05-17 01:05:31.666689 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 01:05:31.666699 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-05-17 01:05:31.666709 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-05-17
01:05:31.666720 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-05-17 01:05:31.666730 | orchestrator |
2025-05-17 01:05:31.666763 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-05-17 01:05:31.666774 | orchestrator |
2025-05-17 01:05:31.666784 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-17 01:05:31.666795 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 01:05:31.666807 | orchestrator |
2025-05-17 01:05:31.666817 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-05-17 01:05:31.666840 | orchestrator | failed: [testbed-node-0] (item=glance (image)) => {"ansible_loop_var": "item", "item": {"description": "Openstack Image", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9292"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9292"}], "name": "glance", "type": "image"}, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-17 01:05:31.666855 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_loop_var": "item", "item": {"description": "Openstack Image", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9292"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9292"}], "name": "glance", "type": "image"}, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}], "unreachable": true}
2025-05-17 01:05:31.666867 | orchestrator |
2025-05-17 01:05:31.666877 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:05:31.666888 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666907 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666917 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:05:31.666926 | orchestrator |
2025-05-17 01:05:31.666937 | orchestrator |
2025-05-17 01:05:31.666946 | orchestrator |
2025-05-17 01:05:31.666956 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-17 01:05:31.666966 | orchestrator |
2025-05-17 01:05:31.666977 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 01:05:31.666997 | orchestrator | Saturday 17 May 2025 01:03:31 +0000 (0:00:00.783) 0:00:00.783 **********
2025-05-17 01:05:31.667008 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:05:31.667019 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:05:31.667030 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:05:31.667041 | orchestrator |
2025-05-17 01:05:31.667051 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 01:05:31.667062 | orchestrator | Saturday 17 May 2025 01:03:32 +0000 (0:00:00.616) 0:00:01.400 **********
2025-05-17 01:05:31.667072 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-17 01:05:31.667083 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-17 01:05:31.667093 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-17 01:05:31.667104 | orchestrator |
2025-05-17 01:05:31.667115 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-17 01:05:31.667121 | orchestrator |
2025-05-17 01:05:31.667127 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-17 01:05:31.667133 | orchestrator | Saturday 17 May 2025 01:03:32 +0000 (0:00:00.261) 0:00:01.661 **********
2025-05-17 01:05:31.667140 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 01:05:31.667146 | orchestrator |
2025-05-17 01:05:31.667152 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-17 01:05:31.667158 | orchestrator | Saturday 17 May 2025 01:03:32 +0000 (0:00:00.613) 0:00:02.275 **********
2025-05-17 01:05:31.667164 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-17 01:05:31.667170 | orchestrator |
2025-05-17 01:05:31.667176 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-17 01:05:31.667182 | orchestrator | Saturday 17 May 2025 01:03:36 +0000 (0:00:03.612) 0:00:05.888 **********
2025-05-17 01:05:31.667188 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-17 01:05:31.667194 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-17 01:05:31.667200 | orchestrator |
2025-05-17 01:05:31.667207 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-05-17 01:05:31.667213 | orchestrator | Saturday 17 May 2025 01:03:43 +0000 (0:00:06.510) 0:00:12.399 **********
2025-05-17 01:05:31.667219 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-17 01:05:31.667225 | orchestrator |
2025-05-17 01:05:31.667231 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-05-17 01:05:31.667237 | orchestrator | Saturday 17 May 2025 01:03:46 +0000 (0:00:03.557) 0:00:15.957 **********
2025-05-17 01:05:31.667243 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-17 01:05:31.667249 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-05-17 01:05:31.667255 | orchestrator |
2025-05-17 01:05:31.667264 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-05-17 01:05:31.667275 | orchestrator | Saturday 17 May 2025 01:03:50 +0000 (0:00:03.832) 0:00:19.789 **********
2025-05-17 01:05:31.667286 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-17 01:05:31.667296 | orchestrator |
2025-05-17 01:05:31.667306 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-05-17 01:05:31.667329 | orchestrator | Saturday 17 May 2025 01:03:53 +0000 (0:00:03.211) 0:00:23.001 **********
2025-05-17 01:05:31.667340 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-05-17 01:05:31.667351 | orchestrator |
2025-05-17 01:05:31.667358 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-05-17 01:05:31.667364 | orchestrator | Saturday 17 May 2025 01:03:57 +0000 (0:00:04.227) 0:00:27.228 **********
2025-05-17 01:05:31.667370 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:05:31.667376 | orchestrator |
2025-05-17 01:05:31.667382 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-05-17 01:05:31.667388 | orchestrator | Saturday 17 May 2025 01:04:01 +0000 (0:00:03.519) 0:00:30.748 **********
2025-05-17 01:05:31.667395 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:05:31.667406 | orchestrator |
2025-05-17 01:05:31.667416 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-05-17 01:05:31.667427 | orchestrator | Saturday 17 May 2025 01:04:05 +0000 (0:00:03.909) 0:00:34.658 **********
2025-05-17 01:05:31.667437 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:05:31.667447 | orchestrator |
2025-05-17 01:05:31.667458 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-05-17 01:05:31.667469 | orchestrator | Saturday 17 May 2025 01:04:08 +0000 (0:00:03.542) 0:00:38.200 **********
2025-05-17 01:05:31.667488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-17 01:05:31.667500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-17 01:05:31.667512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-17 01:05:31.667535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-17 01:05:31.667548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-17 01:05:31.667559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-17 01:05:31.667570 | orchestrator |
2025-05-17 01:05:31.667586 | orchestrator | TASK [magnum : Check if policies shall be overwritten]
*************************
2025-05-17 01:05:31.667598 | orchestrator | Saturday 17 May 2025 01:04:11 +0000 (0:00:03.040) 0:00:41.241 **********
2025-05-17 01:05:31.667609 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:05:31.667619 | orchestrator |
2025-05-17 01:05:31.667629 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-05-17 01:05:31.667640 | orchestrator | Saturday 17 May 2025 01:04:12 +0000 (0:00:00.245) 0:00:41.487 **********
2025-05-17 01:05:31.667651 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:05:31.667661 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:05:31.667671 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:05:31.667682 | orchestrator |
2025-05-17 01:05:31.667691 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-05-17 01:05:31.667701 | orchestrator | Saturday 17 May 2025 01:04:13 +0000 (0:00:01.051) 0:00:42.538 **********
2025-05-17 01:05:31.667711 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-17 01:05:31.667720 | orchestrator |
2025-05-17 01:05:31.667730 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-05-17 01:05:31.667769 | orchestrator | Saturday 17 May 2025 01:04:14 +0000 (0:00:00.824) 0:00:43.363 **********
2025-05-17 01:05:31.667781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-17 01:05:31.667806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-17 01:05:31.667819 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:05:31.667830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-17 01:05:31.667849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-17 01:05:31.667860 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:05:31.667871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-17 01:05:31.667887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-17 01:05:31.667899 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:05:31.667910 | orchestrator |
2025-05-17 01:05:31.667921 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-05-17 01:05:31.667931 | orchestrator | Saturday 17 May 2025 01:04:15 +0000 (0:00:01.574) 0:00:44.937 **********
2025-05-17 01:05:31.667942 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:05:31.667953 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:05:31.667963 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:05:31.667973 | orchestrator |
2025-05-17 01:05:31.667983 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-17 01:05:31.667994 | orchestrator | Saturday 17 May 2025 01:04:16 +0000 (0:00:00.402) 0:00:45.340 **********
2025-05-17 01:05:31.668008 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 01:05:31.668019 | orchestrator |
2025-05-17 01:05:31.668029 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-05-17 01:05:31.668040 | orchestrator | Saturday 17 May 2025 01:04:17 +0000 (0:00:01.228) 0:00:46.569 **********
2025-05-17 01:05:31.668051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-17 01:05:31.668070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-17 01:05:31.668083 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.668096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.668111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.668118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.668124 | orchestrator | 2025-05-17 01:05:31.668131 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-17 01:05:31.668137 | orchestrator | Saturday 17 May 2025 01:04:20 +0000 (0:00:03.133) 0:00:49.702 ********** 2025-05-17 01:05:31.668149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 01:05:31.668160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 01:05:31.668167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:05:31.668177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:05:31.668183 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:05:31.668191 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:05:31.668203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 01:05:31.668255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:05:31 | INFO  | Task 8dc38f49-04b8-4c6e-b36b-91872a0594a2 is in state SUCCESS 2025-05-17 01:05:31.668290 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:05:31.668300 | orchestrator | 2025-05-17 01:05:31.668311 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-17 01:05:31.668322 | orchestrator | Saturday 17 May 2025 01:04:22 +0000 (0:00:01.718) 0:00:51.421 ********** 2025-05-17 01:05:31.668333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 01:05:31.668349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:05:31.668359 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:05:31.668370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 01:05:31.668388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:05:31.668406 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:05:31.668417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 01:05:31.668428 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:05:31.668438 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:05:31.668449 | orchestrator | 2025-05-17 01:05:31.668459 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-17 01:05:31.668469 | orchestrator | Saturday 17 May 2025 01:04:23 +0000 (0:00:01.762) 0:00:53.184 ********** 2025-05-17 01:05:31.668483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.668493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.668517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.668529 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.668540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.668556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.668567 | orchestrator | 2025-05-17 01:05:31.668578 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-17 01:05:31.668589 | orchestrator | Saturday 17 May 2025 01:04:26 +0000 (0:00:02.599) 0:00:55.783 ********** 2025-05-17 01:05:31.668599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.668626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.668639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.668654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.668665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.668683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.668694 | orchestrator | 2025-05-17 01:05:31.668705 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-17 01:05:31.668715 | orchestrator | Saturday 17 May 2025 
01:04:31 +0000 (0:00:04.795) 0:01:00.579 ********** 2025-05-17 01:05:31.668788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 01:05:31.668804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:05:31.668815 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:05:31.668832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 01:05:31.668843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:05:31.668862 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:05:31.668881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-17 01:05:31.668893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:05:31.668904 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:05:31.668914 | orchestrator | 2025-05-17 01:05:31.668925 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-17 01:05:31.668936 | orchestrator | Saturday 17 May 2025 01:04:32 +0000 (0:00:00.761) 0:01:01.340 ********** 2025-05-17 01:05:31.668946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.668962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.668984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-17 01:05:31.669002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.669011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.669018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:05:31.669024 | orchestrator | 2025-05-17 01:05:31.669030 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-17 01:05:31.669036 | orchestrator | Saturday 17 May 2025 01:04:34 +0000 (0:00:02.225) 0:01:03.566 ********** 2025-05-17 01:05:31.669043 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:05:31.669049 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:05:31.669055 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:05:31.669066 | orchestrator | 2025-05-17 01:05:31.669075 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-17 01:05:31.669082 | orchestrator | Saturday 17 May 2025 01:04:34 +0000 (0:00:00.256) 0:01:03.823 ********** 2025-05-17 01:05:31.669088 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:05:31.669094 | orchestrator | 2025-05-17 01:05:31.669100 | orchestrator | TASK [magnum : Creating Magnum database 
user and setting permissions] ********** 2025-05-17 01:05:31.669106 | orchestrator | Saturday 17 May 2025 01:04:37 +0000 (0:00:02.498) 0:01:06.321 ********** 2025-05-17 01:05:31.669112 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:05:31.669118 | orchestrator | 2025-05-17 01:05:31.669124 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-17 01:05:31.669130 | orchestrator | Saturday 17 May 2025 01:04:39 +0000 (0:00:02.379) 0:01:08.700 ********** 2025-05-17 01:05:31.669137 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:05:31.669143 | orchestrator | 2025-05-17 01:05:31.669149 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-17 01:05:31.669155 | orchestrator | Saturday 17 May 2025 01:04:54 +0000 (0:00:14.648) 0:01:23.348 ********** 2025-05-17 01:05:31.669162 | orchestrator | 2025-05-17 01:05:31.669168 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-17 01:05:31.669174 | orchestrator | Saturday 17 May 2025 01:04:54 +0000 (0:00:00.055) 0:01:23.404 ********** 2025-05-17 01:05:31.669180 | orchestrator | 2025-05-17 01:05:31.669186 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-17 01:05:31.669192 | orchestrator | Saturday 17 May 2025 01:04:54 +0000 (0:00:00.143) 0:01:23.548 ********** 2025-05-17 01:05:31.669198 | orchestrator | 2025-05-17 01:05:31.669204 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-17 01:05:31.669210 | orchestrator | Saturday 17 May 2025 01:04:54 +0000 (0:00:00.053) 0:01:23.601 ********** 2025-05-17 01:05:31.669216 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:05:31.669222 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:05:31.669228 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:05:31.669235 | orchestrator | 2025-05-17 
01:05:31.669241 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-17 01:05:31.669247 | orchestrator | Saturday 17 May 2025 01:05:13 +0000 (0:00:19.624) 0:01:43.226 ********** 2025-05-17 01:05:31.669253 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:05:31.669259 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:05:31.669265 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:05:31.669271 | orchestrator | 2025-05-17 01:05:31.669277 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 01:05:31.669288 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-17 01:05:31.669296 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-17 01:05:31.669302 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-17 01:05:31.669308 | orchestrator | 2025-05-17 01:05:31.669314 | orchestrator | 2025-05-17 01:05:31.669321 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:05:31.669327 | orchestrator | Saturday 17 May 2025 01:05:30 +0000 (0:00:17.047) 0:02:00.274 ********** 2025-05-17 01:05:31.669333 | orchestrator | =============================================================================== 2025-05-17 01:05:31.669339 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.62s 2025-05-17 01:05:31.669345 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 17.05s 2025-05-17 01:05:31.669351 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.65s 2025-05-17 01:05:31.669357 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.51s 2025-05-17 01:05:31.669368 | 
orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.80s 2025-05-17 01:05:31.669374 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.23s 2025-05-17 01:05:31.669380 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.91s 2025-05-17 01:05:31.669386 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.83s 2025-05-17 01:05:31.669392 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.61s 2025-05-17 01:05:31.669398 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.56s 2025-05-17 01:05:31.669403 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.54s 2025-05-17 01:05:31.669408 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.52s 2025-05-17 01:05:31.669414 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.21s 2025-05-17 01:05:31.669419 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.13s 2025-05-17 01:05:31.669424 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.04s 2025-05-17 01:05:31.669430 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.60s 2025-05-17 01:05:31.669435 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.50s 2025-05-17 01:05:31.669440 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.38s 2025-05-17 01:05:31.669445 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.23s 2025-05-17 01:05:31.669451 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.76s 2025-05-17 01:05:31.669459 | orchestrator | 
2025-05-17 01:05:31 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:05:31.669465 | orchestrator | 2025-05-17 01:05:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:05:34.713518 | orchestrator | 2025-05-17 01:05:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:05:34.713616 | orchestrator | 2025-05-17 01:05:34 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:05:34.713626 | orchestrator | 2025-05-17 01:05:34 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED 2025-05-17 01:05:34.713633 | orchestrator | 2025-05-17 01:05:34 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:05:34.713720 | orchestrator | 2025-05-17 01:05:34 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:05:34.713728 | orchestrator | 2025-05-17 01:05:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:06:02.186400 | orchestrator | 2025-05-17 01:06:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:06:02.186503 | orchestrator | 2025-05-17 01:06:02 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:06:02.186518 | orchestrator | 2025-05-17 01:06:02 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED 2025-05-17 01:06:02.186951 | orchestrator | 2025-05-17 01:06:02 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:06:02.191366 | orchestrator | 2025-05-17 01:06:02 | INFO  | Task
0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state STARTED 2025-05-17 01:06:02.191431 | orchestrator | 2025-05-17 01:06:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:06:05.227878 | orchestrator | 2025-05-17 01:06:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:06:05.227994 | orchestrator | 2025-05-17 01:06:05 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:06:05.229212 | orchestrator | 2025-05-17 01:06:05 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED 2025-05-17 01:06:05.231498 | orchestrator | 2025-05-17 01:06:05 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:06:05.231883 | orchestrator | 2025-05-17 01:06:05 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:06:05.238192 | orchestrator | 2025-05-17 01:06:05 | INFO  | Task 0ff336ca-5a63-4a58-9cc9-99411a525f5e is in state SUCCESS 2025-05-17 01:06:05.239856 | orchestrator | 2025-05-17 01:06:05.239932 | orchestrator | 2025-05-17 01:06:05.239941 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:06:05.239946 | orchestrator | 2025-05-17 01:06:05.239951 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:06:05.239956 | orchestrator | Saturday 17 May 2025 01:01:14 +0000 (0:00:00.496) 0:00:00.496 ********** 2025-05-17 01:06:05.239960 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:06:05.239966 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:06:05.239970 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:06:05.239974 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:06:05.239996 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:06:05.240011 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:06:05.240016 | orchestrator | 2025-05-17 01:06:05.240019 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2025-05-17 01:06:05.240023 | orchestrator | Saturday 17 May 2025 01:01:15 +0000 (0:00:00.989) 0:00:01.486 ********** 2025-05-17 01:06:05.240028 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-17 01:06:05.240032 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-17 01:06:05.240036 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-17 01:06:05.240040 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-17 01:06:05.240044 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-17 01:06:05.240047 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-17 01:06:05.240051 | orchestrator | 2025-05-17 01:06:05.240055 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-17 01:06:05.240058 | orchestrator | 2025-05-17 01:06:05.240062 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-17 01:06:05.240066 | orchestrator | Saturday 17 May 2025 01:01:16 +0000 (0:00:00.932) 0:00:02.418 ********** 2025-05-17 01:06:05.240071 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 01:06:05.240076 | orchestrator | 2025-05-17 01:06:05.240080 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-17 01:06:05.240084 | orchestrator | Saturday 17 May 2025 01:01:17 +0000 (0:00:01.297) 0:00:03.716 ********** 2025-05-17 01:06:05.240087 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:06:05.240091 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:06:05.240095 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:06:05.240099 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:06:05.240102 | orchestrator | ok: [testbed-node-4] 
2025-05-17 01:06:05.240182 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:06:05.240187 | orchestrator | 2025-05-17 01:06:05.240191 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-17 01:06:05.240195 | orchestrator | Saturday 17 May 2025 01:01:18 +0000 (0:00:01.217) 0:00:04.934 ********** 2025-05-17 01:06:05.240199 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:06:05.240202 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:06:05.240206 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:06:05.240210 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:06:05.240214 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:06:05.240218 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:06:05.240222 | orchestrator | 2025-05-17 01:06:05.240225 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-17 01:06:05.240230 | orchestrator | Saturday 17 May 2025 01:01:19 +0000 (0:00:01.038) 0:00:05.972 ********** 2025-05-17 01:06:05.240234 | orchestrator | ok: [testbed-node-0] => { 2025-05-17 01:06:05.240238 | orchestrator |  "changed": false, 2025-05-17 01:06:05.240242 | orchestrator |  "msg": "All assertions passed" 2025-05-17 01:06:05.240247 | orchestrator | } 2025-05-17 01:06:05.240251 | orchestrator | ok: [testbed-node-1] => { 2025-05-17 01:06:05.240255 | orchestrator |  "changed": false, 2025-05-17 01:06:05.240258 | orchestrator |  "msg": "All assertions passed" 2025-05-17 01:06:05.240262 | orchestrator | } 2025-05-17 01:06:05.240266 | orchestrator | ok: [testbed-node-2] => { 2025-05-17 01:06:05.240270 | orchestrator |  "changed": false, 2025-05-17 01:06:05.240273 | orchestrator |  "msg": "All assertions passed" 2025-05-17 01:06:05.240277 | orchestrator | } 2025-05-17 01:06:05.240281 | orchestrator | ok: [testbed-node-3] => { 2025-05-17 01:06:05.240285 | orchestrator |  "changed": false, 2025-05-17 01:06:05.240288 | orchestrator |  "msg": "All assertions 
passed" 2025-05-17 01:06:05.240292 | orchestrator | } 2025-05-17 01:06:05.240296 | orchestrator | ok: [testbed-node-4] => { 2025-05-17 01:06:05.240300 | orchestrator |  "changed": false, 2025-05-17 01:06:05.240308 | orchestrator |  "msg": "All assertions passed" 2025-05-17 01:06:05.240312 | orchestrator | } 2025-05-17 01:06:05.240316 | orchestrator | ok: [testbed-node-5] => { 2025-05-17 01:06:05.240319 | orchestrator |  "changed": false, 2025-05-17 01:06:05.240323 | orchestrator |  "msg": "All assertions passed" 2025-05-17 01:06:05.240327 | orchestrator | } 2025-05-17 01:06:05.240331 | orchestrator | 2025-05-17 01:06:05.240335 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-17 01:06:05.240338 | orchestrator | Saturday 17 May 2025 01:01:20 +0000 (0:00:00.627) 0:00:06.600 ********** 2025-05-17 01:06:05.240342 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.240346 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.240350 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.240354 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.240357 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.240361 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.240365 | orchestrator | 2025-05-17 01:06:05.240369 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-17 01:06:05.240372 | orchestrator | Saturday 17 May 2025 01:01:21 +0000 (0:00:00.641) 0:00:07.242 ********** 2025-05-17 01:06:05.240376 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-17 01:06:05.240380 | orchestrator | 2025-05-17 01:06:05.240384 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-17 01:06:05.240387 | orchestrator | Saturday 17 May 2025 01:01:24 +0000 (0:00:03.563) 0:00:10.805 ********** 2025-05-17 01:06:05.240391 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-17 01:06:05.240396 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-17 01:06:05.240399 | orchestrator | 2025-05-17 01:06:05.240411 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-17 01:06:05.240415 | orchestrator | Saturday 17 May 2025 01:01:31 +0000 (0:00:06.298) 0:00:17.104 ********** 2025-05-17 01:06:05.240419 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-17 01:06:05.240423 | orchestrator | 2025-05-17 01:06:05.240427 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-17 01:06:05.240431 | orchestrator | Saturday 17 May 2025 01:01:34 +0000 (0:00:03.182) 0:00:20.286 ********** 2025-05-17 01:06:05.240434 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-17 01:06:05.240442 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-17 01:06:05.240446 | orchestrator | 2025-05-17 01:06:05.240450 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-17 01:06:05.240454 | orchestrator | Saturday 17 May 2025 01:01:37 +0000 (0:00:03.668) 0:00:23.955 ********** 2025-05-17 01:06:05.240458 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-17 01:06:05.240462 | orchestrator | 2025-05-17 01:06:05.240466 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-17 01:06:05.240469 | orchestrator | Saturday 17 May 2025 01:01:41 +0000 (0:00:03.165) 0:00:27.120 ********** 2025-05-17 01:06:05.240473 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-17 01:06:05.240477 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-17 01:06:05.240481 | orchestrator | 
2025-05-17 01:06:05.240699 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-17 01:06:05.240706 | orchestrator | Saturday 17 May 2025 01:01:49 +0000 (0:00:08.267) 0:00:35.387 ********** 2025-05-17 01:06:05.240711 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.240755 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.240760 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.240765 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.240770 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.240775 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.240785 | orchestrator | 2025-05-17 01:06:05.240789 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-17 01:06:05.240794 | orchestrator | Saturday 17 May 2025 01:01:50 +0000 (0:00:00.737) 0:00:36.125 ********** 2025-05-17 01:06:05.240799 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.240939 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.240944 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.240948 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.240952 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.240956 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.240960 | orchestrator | 2025-05-17 01:06:05.240963 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-17 01:06:05.240967 | orchestrator | Saturday 17 May 2025 01:01:54 +0000 (0:00:04.338) 0:00:40.464 ********** 2025-05-17 01:06:05.240971 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:06:05.240975 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:06:05.240979 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:06:05.240983 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:06:05.240986 | orchestrator | ok: [testbed-node-4] 
2025-05-17 01:06:05.240990 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:06:05.240994 | orchestrator | 2025-05-17 01:06:05.240998 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-17 01:06:05.241002 | orchestrator | Saturday 17 May 2025 01:01:56 +0000 (0:00:01.581) 0:00:42.046 ********** 2025-05-17 01:06:05.241005 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.241009 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.241013 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.241017 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.241021 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.241025 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.241028 | orchestrator | 2025-05-17 01:06:05.241032 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-17 01:06:05.241036 | orchestrator | Saturday 17 May 2025 01:01:59 +0000 (0:00:03.502) 0:00:45.548 ********** 2025-05-17 01:06:05.241043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-05-17 01:06:05.241064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.241093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241100 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.241144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.241188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.241192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.241249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.241269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.241283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.241302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.241334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 
01:06:05.241346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.241454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.241480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.241493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.241497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.241521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241582 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.241601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.241625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.241657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-05-17 01:06:05.241661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.241693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.241701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.241751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.241757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.241770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.241790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.241823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.241827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.241835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.241873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.241897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.241934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.241954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.241958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241965 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.241969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 
01:06:05.241982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.241986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.241990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.241997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.242001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.242005 | orchestrator | 2025-05-17 01:06:05.242009 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-17 01:06:05.242013 | orchestrator | Saturday 17 May 2025 01:02:02 +0000 (0:00:02.883) 0:00:48.432 ********** 2025-05-17 
01:06:05.242063 | orchestrator | [WARNING]: Skipped 2025-05-17 01:06:05.242068 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-17 01:06:05.242073 | orchestrator | due to this access issue: 2025-05-17 01:06:05.242080 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-17 01:06:05.242084 | orchestrator | a directory 2025-05-17 01:06:05.242088 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-17 01:06:05.242092 | orchestrator | 2025-05-17 01:06:05.242096 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-17 01:06:05.242100 | orchestrator | Saturday 17 May 2025 01:02:03 +0000 (0:00:00.603) 0:00:49.035 ********** 2025-05-17 01:06:05.242108 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 01:06:05.242115 | orchestrator | 2025-05-17 01:06:05.242119 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-17 01:06:05.242123 | orchestrator | Saturday 17 May 2025 01:02:04 +0000 (0:00:01.976) 0:00:51.011 ********** 2025-05-17 01:06:05.242129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.242138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.242143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.242148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.242159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.242164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.242172 | orchestrator | 2025-05-17 01:06:05.242177 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-17 01:06:05.242182 | orchestrator | Saturday 17 May 2025 01:02:09 +0000 (0:00:04.964) 0:00:55.976 ********** 2025-05-17 01:06:05.242187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.242192 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.242196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.242200 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.242208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.242212 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.242220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.242227 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.242231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.242235 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.242239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.242243 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.242247 | orchestrator | 2025-05-17 01:06:05.242250 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-17 01:06:05.242254 | orchestrator | Saturday 17 May 2025 01:02:13 +0000 (0:00:03.449) 0:00:59.426 ********** 2025-05-17 01:06:05.242258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.242262 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.242271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.242279 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.242283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.242287 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.242292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.242296 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.242300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.242304 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.242308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.242312 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.242316 | orchestrator | 2025-05-17 01:06:05.242322 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-17 01:06:05.242344 | orchestrator | Saturday 17 May 2025 01:02:16 +0000 (0:00:03.325) 0:01:02.751 ********** 2025-05-17 01:06:05.242348 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.242352 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.242356 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.242360 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.242364 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.242368 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.242371 | orchestrator | 2025-05-17 01:06:05.242378 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-17 01:06:05.242382 | orchestrator | Saturday 17 May 2025 01:02:21 +0000 (0:00:04.743) 0:01:07.495 ********** 2025-05-17 01:06:05.242385 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.242389 | orchestrator | 2025-05-17 01:06:05.242393 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-17 01:06:05.242397 | orchestrator | Saturday 17 May 2025 01:02:21 +0000 (0:00:00.149) 0:01:07.644 ********** 2025-05-17 01:06:05.242402 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.242408 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.242414 | orchestrator | skipping: [testbed-node-2] 2025-05-17 
01:06:05.242419 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.242425 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.242431 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.242438 | orchestrator | 2025-05-17 01:06:05.242442 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-17 01:06:05.242446 | orchestrator | Saturday 17 May 2025 01:02:22 +0000 (0:00:00.757) 0:01:08.401 ********** 2025-05-17 01:06:05.242450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.242454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.242459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.242470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.242477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.242481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.242485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.242489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.242493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.242500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.242938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.242968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.242974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.242979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.242985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.242998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243022 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.243027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.243032 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.243066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.243113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.243129 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243147 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.243154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.243158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-05-17 01:06:05.243171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.243190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.243268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.243287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243295 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.243305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.243309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.243334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.243371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.243389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243397 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.243403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.243410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.243431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.243513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.243564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243574 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.243578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.243589 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.243636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.243679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.243685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.243700 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.243704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.243772 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.243776 | orchestrator | 2025-05-17 01:06:05.243781 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-17 01:06:05.243786 | orchestrator | Saturday 17 May 2025 01:02:26 +0000 (0:00:04.403) 0:01:12.805 ********** 2025-05-17 01:06:05.243790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.244355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.244510 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.244520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.244575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.244591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.244599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.244604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.244629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.244633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.244642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 
01:06:05.244659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.244678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.244686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.244703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.244710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.244759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.244790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2025-05-17 01:06:05.244794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.244979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.244988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 
'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:06:05.244992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.244996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-17 01:06:05.245000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-17 01:06:05.245051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.245091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-17 01:06:05.245123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-17 01:06:05.245187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-17 01:06:05.245302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-17 01:06:05.245323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.245378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:06:05.245423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-17 01:06:05.245441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.245445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.245465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.245470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:06:05.245481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:06:05.245514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.245518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-17 01:06:05.245525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.245529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.245549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.245554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-17 01:06:05.245563 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.245571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.245584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.245595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245599 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.245654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.245660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245664 | orchestrator | 2025-05-17 01:06:05.245668 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-17 01:06:05.245673 | orchestrator | Saturday 17 May 2025 01:02:32 +0000 (0:00:05.290) 0:01:18.096 ********** 2025-05-17 01:06:05.245689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.245695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.245743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.245760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.245769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.245778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.245945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.245953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.245957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245973 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.245982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.245994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.246009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.246086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.246121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.246157 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.246165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.246193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.246198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.246206 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.246238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.246295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.246303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.246338 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.246343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.246351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246534 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.246546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-17 01:06:05.246563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.246583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.246592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.246607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.246622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.246631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.246639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.246646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246666 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.246670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.246685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.246706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.246714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 
01:06:05.246733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.246743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.246765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.246773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.246796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': 
False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.246803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246808 | orchestrator | 2025-05-17 01:06:05.246812 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-17 01:06:05.246816 | orchestrator | Saturday 17 May 2025 01:02:38 +0000 (0:00:06.635) 0:01:24.732 ********** 2025-05-17 01:06:05.246820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.246824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.246988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.246993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.246997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.247026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247031 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.247035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.247050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.247093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247099 | orchestrator | skipping: [testbed-node-4] 2025-05-17 
01:06:05.247103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.247107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.247228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.247254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.247276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 
01:06:05.247360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.247366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.247370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247421 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.247433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.247438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.247471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.247483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.247515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.247568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.247662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.247666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.247671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247694 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.247698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.247799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.247819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.247826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.247834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})
2025-05-17 01:06:05.247839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.247945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.247952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-17 01:06:05.247970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.247979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.247983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.247987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-17 01:06:05.248007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.248029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.248039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.248047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:06:05.248055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.248069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-17 01:06:05.248083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.248087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-17 01:06:05.248104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-17 01:06:05.248161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.248180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.248191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.248199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:06:05.248207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.248211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-17 01:06:05.248235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.248239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248243 | orchestrator |
2025-05-17 01:06:05.248247 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-05-17 01:06:05.248252 | orchestrator | Saturday 17 May 2025 01:02:42 +0000 (0:00:04.244) 0:01:28.976 **********
2025-05-17 01:06:05.248256 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:06:05.248260 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:06:05.248264 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:06:05.248267 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:06:05.248271 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:06:05.248275 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:06:05.248278 | orchestrator |
2025-05-17 01:06:05.248282 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-05-17 01:06:05.248286 | orchestrator | Saturday 17 May 2025 01:02:47 +0000 (0:00:04.619) 0:01:33.596 **********
2025-05-17 01:06:05.248290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-17 01:06:05.248307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-17 01:06:05.248326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.248352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.248359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.248368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:06:05.248378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-17 01:06:05.248383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-17 01:06:05.248403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-17 01:06:05.248407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248411 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:06:05.248415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-17 01:06:05.248423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.248438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.248451 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.248462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.248475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.248487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.248499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.248506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.248529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.248533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248537 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.248541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.248552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.248800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.248815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.248820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.248877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.248890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.248895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.248918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.248924 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248929 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.248934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.248942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.248975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.248982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.248986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.248991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.249013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.249024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-17 01:06:05.249028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.249045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.249053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.249064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.249092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 
'timeout': '30'}}})  2025-05-17 01:06:05.249099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.249103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.249107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.249124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.249138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.249142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.249151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.249164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.249178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.249213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.249227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.249231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.249239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.249263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.249267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.249275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.249279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-17 01:06:05.249283 | orchestrator |
2025-05-17 01:06:05.249287 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-05-17 01:06:05.249291 | orchestrator | Saturday 17 May 2025 01:02:52 +0000 (0:00:04.422) 0:01:38.018 **********
2025-05-17 01:06:05.249299 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:06:05.249313 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:06:05.249317 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:06:05.249321 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:06:05.249325 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:06:05.249329 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:06:05.249333 | orchestrator |
2025-05-17 01:06:05.249337 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-05-17 01:06:05.249340 | orchestrator | Saturday 17 May 2025 01:02:55 +0000 (0:00:03.206) 0:01:41.224 **********
2025-05-17 01:06:05.249344 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:06:05.249352 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:06:05.249356 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:06:05.249360 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:06:05.249363 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:06:05.249367 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:06:05.249371 | orchestrator |
2025-05-17 01:06:05.249375 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-05-17 01:06:05.249379 | orchestrator | Saturday 17 May 2025 01:02:57 +0000 (0:00:02.469) 0:01:43.693 **********
2025-05-17 01:06:05.249383 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:06:05.249386 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:06:05.249390 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:06:05.249394 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:06:05.249398 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:06:05.249402 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:06:05.249406 | orchestrator |
2025-05-17 01:06:05.249410 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-05-17 01:06:05.249414 | orchestrator | Saturday 17 May 2025 01:02:59 +0000 (0:00:02.172) 0:01:45.865 **********
2025-05-17 01:06:05.249418 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:06:05.249421 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:06:05.249425 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:06:05.249429 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:06:05.249433 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:06:05.249437 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:06:05.249440 | orchestrator |
2025-05-17 01:06:05.249444 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-05-17 01:06:05.249448 | orchestrator | Saturday 17 May 2025 01:03:02 +0000 (0:00:02.398) 0:01:48.264 **********
2025-05-17 01:06:05.249452 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:06:05.249456 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:06:05.249460 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:06:05.249464 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:06:05.249468 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:06:05.249471 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:06:05.249475 | orchestrator |
2025-05-17 01:06:05.249479 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-05-17 01:06:05.249483 | orchestrator | Saturday 17 May 2025 01:03:05 +0000 (0:00:03.688) 0:01:51.953 **********
2025-05-17 01:06:05.249487 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:06:05.249491 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:06:05.249495 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:06:05.249498 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:06:05.249503 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:06:05.249506 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:06:05.249510 | orchestrator |
2025-05-17 01:06:05.249514 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-05-17 01:06:05.249518 | orchestrator | Saturday 17 May 2025 01:03:07 +0000 (0:00:02.055) 0:01:54.009 **********
2025-05-17 01:06:05.249522 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-17 01:06:05.249525 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:06:05.249534 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-17 01:06:05.249538 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:06:05.249541 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-17 01:06:05.249545 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:06:05.249549 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-17 01:06:05.249553 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:06:05.249557 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-17 01:06:05.249560 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:06:05.249564 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-17 01:06:05.249568 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:06:05.249572 | orchestrator |
2025-05-17 01:06:05.249575 | orchestrator
| TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-17 01:06:05.249579 | orchestrator | Saturday 17 May 2025 01:03:09 +0000 (0:00:01.899) 0:01:55.908 ********** 2025-05-17 01:06:05.249594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.249602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249606 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.249622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.249635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.249639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.249650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.249659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.249669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.249678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.249686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249690 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.249694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.249701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.249712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-17 01:06:05.250036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.250051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 
01:06:05.250078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.250100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.250109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.250137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.250145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250149 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.250153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.250157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.250194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-05-17 01:06:05.250198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.250232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.250240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.250297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.250305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250310 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.250314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.250318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.250353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.250441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.250451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.250490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.250495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250499 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.250503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.250507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250511 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.250541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.250579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.250587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.250616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.250624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250628 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.250632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.250636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-17 01:06:05.250664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.250668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 
01:06:05.250676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.250704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.250713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.250732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.250738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.250749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251041 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.251046 | orchestrator | 2025-05-17 01:06:05.251050 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-17 01:06:05.251054 | orchestrator | Saturday 17 May 2025 01:03:11 +0000 (0:00:02.084) 0:01:57.993 ********** 2025-05-17 01:06:05.251058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.251063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-05-17 01:06:05.251077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.251128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.251176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.251193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251210 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.251217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.251221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.251254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.251299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.251315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251323 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.251339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.251344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.251372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.251415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.251434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251442 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.251455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.251462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.251482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.251526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.251555 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251563 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.251567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.251585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251601 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.251609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-17 01:06:05.251642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.251654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.251681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251689 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.251693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.251709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.251775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 2025-05-17 01:06:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:06:05.251780 | orchestrator | 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.251824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.251829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251840 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.251849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.251853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.251857 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.251861 | orchestrator | 2025-05-17 01:06:05.251865 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-17 01:06:05.251869 | orchestrator | Saturday 17 May 2025 01:03:16 +0000 (0:00:04.249) 0:02:02.243 ********** 2025-05-17 01:06:05.251873 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.251877 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.251880 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.251884 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.251888 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.251892 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.251896 | orchestrator | 2025-05-17 01:06:05.251899 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-17 01:06:05.251903 | orchestrator | Saturday 17 May 2025 01:03:19 +0000 (0:00:02.996) 0:02:05.239 ********** 2025-05-17 01:06:05.251907 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.251911 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.251915 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.251918 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:06:05.251922 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:06:05.251926 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:06:05.251930 | orchestrator | 2025-05-17 01:06:05.251934 | orchestrator 
| TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-17 01:06:05.251938 | orchestrator | Saturday 17 May 2025 01:03:23 +0000 (0:00:04.461) 0:02:09.701 ********** 2025-05-17 01:06:05.251941 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.251945 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.251949 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.251953 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.251962 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.251970 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.252137 | orchestrator | 2025-05-17 01:06:05.252142 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-17 01:06:05.252146 | orchestrator | Saturday 17 May 2025 01:03:26 +0000 (0:00:02.531) 0:02:12.232 ********** 2025-05-17 01:06:05.252150 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.252154 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.252158 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.252162 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.252165 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.252169 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.252173 | orchestrator | 2025-05-17 01:06:05.252177 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-17 01:06:05.252181 | orchestrator | Saturday 17 May 2025 01:03:29 +0000 (0:00:03.702) 0:02:15.935 ********** 2025-05-17 01:06:05.252184 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.252188 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.252192 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.252196 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.252200 | orchestrator | skipping: [testbed-node-4] 2025-05-17 
01:06:05.252204 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.252207 | orchestrator | 2025-05-17 01:06:05.252211 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-17 01:06:05.252225 | orchestrator | Saturday 17 May 2025 01:03:32 +0000 (0:00:02.707) 0:02:18.642 ********** 2025-05-17 01:06:05.252230 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.252233 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.252237 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.252241 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.252244 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.252248 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.252252 | orchestrator | 2025-05-17 01:06:05.252256 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-17 01:06:05.252263 | orchestrator | Saturday 17 May 2025 01:03:34 +0000 (0:00:02.095) 0:02:20.737 ********** 2025-05-17 01:06:05.252266 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.252270 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.252274 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.252278 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.252281 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.252285 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.252289 | orchestrator | 2025-05-17 01:06:05.252293 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-17 01:06:05.252297 | orchestrator | Saturday 17 May 2025 01:03:37 +0000 (0:00:02.636) 0:02:23.373 ********** 2025-05-17 01:06:05.252301 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.252304 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.252308 | orchestrator | skipping: [testbed-node-5] 2025-05-17 
01:06:05.252312 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.252316 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.252320 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.252323 | orchestrator | 2025-05-17 01:06:05.252327 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-17 01:06:05.252331 | orchestrator | Saturday 17 May 2025 01:03:41 +0000 (0:00:04.164) 0:02:27.537 ********** 2025-05-17 01:06:05.252335 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.252339 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.252343 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.252346 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.252350 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.252354 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.252364 | orchestrator | 2025-05-17 01:06:05.252368 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-17 01:06:05.252371 | orchestrator | Saturday 17 May 2025 01:03:43 +0000 (0:00:02.284) 0:02:29.822 ********** 2025-05-17 01:06:05.252375 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.252379 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.252383 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.252387 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.252390 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.252394 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.252398 | orchestrator | 2025-05-17 01:06:05.252402 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-17 01:06:05.252405 | orchestrator | Saturday 17 May 2025 01:03:46 +0000 (0:00:03.172) 0:02:32.995 ********** 2025-05-17 01:06:05.252409 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-17 01:06:05.252413 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.252417 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-17 01:06:05.252421 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.252425 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-17 01:06:05.252429 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.252433 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-17 01:06:05.252437 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.252440 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-17 01:06:05.252444 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.252448 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-17 01:06:05.252452 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.252456 | orchestrator | 2025-05-17 01:06:05.252459 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-17 01:06:05.252463 | orchestrator | Saturday 17 May 2025 01:03:50 +0000 (0:00:03.162) 0:02:36.157 ********** 2025-05-17 01:06:05.252467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.252485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.252505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.252543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.252549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.252562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.252570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.252598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.252614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.252619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.252627 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.252640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.252664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.252672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.252692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.252701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.252709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.252713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 
01:06:05.252744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.252758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.252766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252772 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:06:05.252779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.252785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.252820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.252830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.252842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.252854 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.252864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.252869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.252888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.252893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252898 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.252903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-17 01:06:05.252907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.252925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.253194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253199 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.253236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.253244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.253273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.253278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253282 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.253286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.253290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.253325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.253330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253342 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.253397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.253425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.253429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.253459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-05-17 01:06:05.253465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.253469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253481 | 
orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.253485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.253520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.253532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.253548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.253562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253567 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.253571 | orchestrator | 2025-05-17 01:06:05.253575 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-17 01:06:05.253581 | orchestrator | Saturday 17 May 2025 01:03:53 +0000 (0:00:03.173) 0:02:39.331 ********** 2025-05-17 01:06:05.253586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.253590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.253622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.253655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.253687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.253759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.253769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.253776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.253800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.253813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.253835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.253853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.253878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.253913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.253927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.253941 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.253945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.253953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.253960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.254005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.254012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.254041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.254057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.254069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-17 01:06:05.254077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.254102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.254111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.254115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.254145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.254155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.254159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-17 01:06:05.254170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 
'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.254180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.254189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254197 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-17 01:06:05.254214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.254222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.254226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-17 01:06:05.254230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.254245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.254256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.254260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.254268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.254278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-17 01:06:05.254289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:06:05.254297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:06:05.254301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-17 01:06:05.254329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-17 01:06:05.254334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-17 01:06:05.254338 | orchestrator | 2025-05-17 01:06:05.254342 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-17 01:06:05.254346 | orchestrator | Saturday 17 May 2025 01:03:57 +0000 (0:00:04.163) 0:02:43.494 ********** 2025-05-17 01:06:05.254350 | orchestrator | skipping: [testbed-node-0] 2025-05-17 
01:06:05.254355 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:06:05.254358 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:06:05.254362 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:06:05.254366 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:06:05.254370 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:06:05.254374 | orchestrator | 2025-05-17 01:06:05.254377 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-17 01:06:05.254381 | orchestrator | Saturday 17 May 2025 01:03:58 +0000 (0:00:00.685) 0:02:44.180 ********** 2025-05-17 01:06:05.254385 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:06:05.254389 | orchestrator | 2025-05-17 01:06:05.254393 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-17 01:06:05.254396 | orchestrator | Saturday 17 May 2025 01:04:00 +0000 (0:00:02.522) 0:02:46.703 ********** 2025-05-17 01:06:05.254403 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:06:05.254407 | orchestrator | 2025-05-17 01:06:05.254411 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-17 01:06:05.254414 | orchestrator | Saturday 17 May 2025 01:04:02 +0000 (0:00:02.162) 0:02:48.865 ********** 2025-05-17 01:06:05.254418 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:06:05.254422 | orchestrator | 2025-05-17 01:06:05.254426 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-17 01:06:05.254430 | orchestrator | Saturday 17 May 2025 01:04:40 +0000 (0:00:37.661) 0:03:26.527 ********** 2025-05-17 01:06:05.254434 | orchestrator | 2025-05-17 01:06:05.254438 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-17 01:06:05.254441 | orchestrator | Saturday 17 May 2025 01:04:40 +0000 (0:00:00.064) 0:03:26.592 ********** 2025-05-17 
01:06:05.254445 | orchestrator | 2025-05-17 01:06:05.254449 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-17 01:06:05.254453 | orchestrator | Saturday 17 May 2025 01:04:40 +0000 (0:00:00.286) 0:03:26.879 ********** 2025-05-17 01:06:05.254457 | orchestrator | 2025-05-17 01:06:05.254460 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-17 01:06:05.254464 | orchestrator | Saturday 17 May 2025 01:04:40 +0000 (0:00:00.054) 0:03:26.933 ********** 2025-05-17 01:06:05.254468 | orchestrator | 2025-05-17 01:06:05.254472 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-17 01:06:05.254475 | orchestrator | Saturday 17 May 2025 01:04:40 +0000 (0:00:00.054) 0:03:26.987 ********** 2025-05-17 01:06:05.254479 | orchestrator | 2025-05-17 01:06:05.254483 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-17 01:06:05.254487 | orchestrator | Saturday 17 May 2025 01:04:41 +0000 (0:00:00.052) 0:03:27.040 ********** 2025-05-17 01:06:05.254491 | orchestrator | 2025-05-17 01:06:05.254494 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-17 01:06:05.254498 | orchestrator | Saturday 17 May 2025 01:04:41 +0000 (0:00:00.246) 0:03:27.287 ********** 2025-05-17 01:06:05.254502 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:06:05.254506 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:06:05.254510 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:06:05.254514 | orchestrator | 2025-05-17 01:06:05.254518 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-17 01:06:05.254522 | orchestrator | Saturday 17 May 2025 01:05:08 +0000 (0:00:27.043) 0:03:54.330 ********** 2025-05-17 01:06:05.254525 | orchestrator | changed: [testbed-node-4] 2025-05-17 
01:06:05.254529 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:06:05.254533 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:06:05.254537 | orchestrator | 2025-05-17 01:06:05.254543 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 01:06:05.254548 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-17 01:06:05.254553 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-17 01:06:05.254565 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-17 01:06:05.254570 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-17 01:06:05.254574 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-17 01:06:05.254578 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-17 01:06:05.254582 | orchestrator | 2025-05-17 01:06:05.254589 | orchestrator | 2025-05-17 01:06:05.254593 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:06:05.254596 | orchestrator | Saturday 17 May 2025 01:06:03 +0000 (0:00:55.042) 0:04:49.373 ********** 2025-05-17 01:06:05.254600 | orchestrator | =============================================================================== 2025-05-17 01:06:05.254604 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 55.04s 2025-05-17 01:06:05.254608 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 37.66s 2025-05-17 01:06:05.254612 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.04s 2025-05-17 01:06:05.254616 | orchestrator | service-ks-register : 
neutron | Granting user roles --------------------- 8.27s 2025-05-17 01:06:05.254620 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.64s 2025-05-17 01:06:05.254624 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.30s 2025-05-17 01:06:05.254628 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.29s 2025-05-17 01:06:05.254631 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.96s 2025-05-17 01:06:05.254635 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.74s 2025-05-17 01:06:05.254639 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.62s 2025-05-17 01:06:05.254642 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.46s 2025-05-17 01:06:05.254646 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.42s 2025-05-17 01:06:05.254650 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.40s 2025-05-17 01:06:05.254654 | orchestrator | Load and persist kernel modules ----------------------------------------- 4.34s 2025-05-17 01:06:05.254657 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.25s 2025-05-17 01:06:05.254661 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.24s 2025-05-17 01:06:05.254665 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 4.16s 2025-05-17 01:06:05.254669 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.16s 2025-05-17 01:06:05.254672 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.70s 2025-05-17 01:06:05.254676 | orchestrator | neutron : Copying over eswitchd.conf 
------------------------------------ 3.69s 2025-05-17 01:06:08.271882 | orchestrator | 2025-05-17 01:06:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:06:08.271988 | orchestrator | 2025-05-17 01:06:08 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:06:08.272255 | orchestrator | 2025-05-17 01:06:08 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED 2025-05-17 01:06:08.272750 | orchestrator | 2025-05-17 01:06:08 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:06:08.273209 | orchestrator | 2025-05-17 01:06:08 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:06:08.273234 | orchestrator | 2025-05-17 01:06:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:06:11.299237 | orchestrator | 2025-05-17 01:06:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:06:11.299352 | orchestrator | 2025-05-17 01:06:11 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:06:11.299633 | orchestrator | 2025-05-17 01:06:11 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED 2025-05-17 01:06:11.300288 | orchestrator | 2025-05-17 01:06:11 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:06:11.300782 | orchestrator | 2025-05-17 01:06:11 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:06:11.300837 | orchestrator | 2025-05-17 01:06:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:06:14.335535 | orchestrator | 2025-05-17 01:06:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:06:14.337947 | orchestrator | 2025-05-17 01:06:14 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:06:14.340318 | orchestrator | 2025-05-17 01:06:14 | INFO  | Task 
a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED 2025-05-17 01:07:30.632458 | orchestrator | 2025-05-17 01:07:30 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:07:30.634214 | orchestrator | 2025-05-17 01:07:30 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:07:30.634256 | orchestrator | 2025-05-17 01:07:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:07:33.683905 | orchestrator | 2025-05-17 01:07:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:07:33.685028 | orchestrator | 2025-05-17 01:07:33 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:07:33.686885 | orchestrator | 2025-05-17 01:07:33 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state STARTED 2025-05-17 01:07:33.688176 | orchestrator | 2025-05-17 01:07:33 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:07:33.690338 | orchestrator | 2025-05-17 01:07:33 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:07:33.690423 | orchestrator | 2025-05-17 01:07:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:07:36.753128 | orchestrator | 2025-05-17 01:07:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:07:36.754974 | orchestrator | 2025-05-17 01:07:36 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:07:36.756699 | orchestrator | 2025-05-17 01:07:36 | INFO  | Task a9cb93d0-e7ff-4262-b229-8c26f973f381 is in state STARTED 2025-05-17 01:07:36.761476 | orchestrator | 2025-05-17 01:07:36.761523 | orchestrator | 2025-05-17 01:07:36 | INFO  | Task a2d1c7ce-6c51-48ec-917e-42387ab78233 is in state SUCCESS 2025-05-17 01:07:36.762494 | orchestrator | 2025-05-17 01:07:36.762549 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
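The polling phase above follows a simple pattern: the deploy wrapper repeatedly queries the state of each submitted task and sleeps between rounds until a task leaves the STARTED state. A minimal sketch of that loop, assuming a hypothetical `get_task_state` lookup standing in for the real task-backend client (this is not the actual OSISM API):

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until no task is still PENDING/STARTED.

    `get_task_state` is a hypothetical callable mapping a task id to a
    state string such as "STARTED" or "SUCCESS"; the real client differs.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                results[task_id] = state  # task finished, stop polling it
        pending -= results.keys()
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

In the log the check interval is nominally one second, though the observed gap between rounds is about three seconds because each round also spends time querying the task states.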
2025-05-17 01:07:36.762564 | orchestrator |
2025-05-17 01:07:36.762575 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-17 01:07:36.762619 | orchestrator | Saturday 17 May 2025 01:03:41 +0000 (0:00:00.313) 0:00:00.313 **********
2025-05-17 01:07:36.762631 | orchestrator | ok: [testbed-manager]
2025-05-17 01:07:36.762645 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:07:36.762657 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:07:36.762668 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:07:36.762766 | orchestrator | ok: [testbed-node-3]
2025-05-17 01:07:36.762778 | orchestrator | ok: [testbed-node-4]
2025-05-17 01:07:36.762789 | orchestrator | ok: [testbed-node-5]
2025-05-17 01:07:36.762861 | orchestrator |
2025-05-17 01:07:36.762906 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-17 01:07:36.762917 | orchestrator | Saturday 17 May 2025 01:03:41 +0000 (0:00:00.635) 0:00:00.949 **********
2025-05-17 01:07:36.762929 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-05-17 01:07:36.762940 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-05-17 01:07:36.762951 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-05-17 01:07:36.762962 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-05-17 01:07:36.762972 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-05-17 01:07:36.762983 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-05-17 01:07:36.762993 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-05-17 01:07:36.763004 | orchestrator |
2025-05-17 01:07:36.763015 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-05-17 01:07:36.763025 | orchestrator |
2025-05-17 01:07:36.763036 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-05-17 01:07:36.763046 | orchestrator | Saturday 17 May 2025 01:03:42 +0000 (0:00:01.280) 0:00:02.230 **********
2025-05-17 01:07:36.763074 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 01:07:36.763087 | orchestrator |
2025-05-17 01:07:36.763100 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-05-17 01:07:36.763113 | orchestrator | Saturday 17 May 2025 01:03:44 +0000 (0:00:01.322) 0:00:03.552 **********
2025-05-17 01:07:36.763131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 01:07:36.763149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 01:07:36.763163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 01:07:36.763214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 01:07:36.763230 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 01:07:36.763250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 01:07:36.763263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 01:07:36.763277 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 01:07:36.763290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 01:07:36.763319 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763334 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 01:07:36.763393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 01:07:36.763414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763448 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.763462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 01:07:36.763481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 01:07:36.763516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763555 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.763575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 01:07:36.763587 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 01:07:36.763599 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.763659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.763734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 01:07:36.763749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.763781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.763801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 01:07:36.763830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.763883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.763915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.763927 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.763943 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.763955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy':
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.763974 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.764320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.764344 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.764374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.764396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.764444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.764476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.764496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.765887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.766438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.766454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 
'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.766480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.766492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.766503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.766552 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.766566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.766577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.766594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.766613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.766625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.766637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.766649 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.767181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.767207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.767228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.767251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.767264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.767275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.767299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.767311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
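The loop output above follows a simple per-host rule: for each service item, kolla-ansible reports `changed` when the service is enabled and the host belongs to the service's group, and `skipping` otherwise. A minimal sketch of that decision, assuming a hypothetical host-to-groups mapping (the service dict shape mirrors the items printed in the log; the inventory mapping below is illustrative, not taken from this job):

```python
# Sketch of the skip/changed pattern visible in the loop output above.
# The service definitions mirror the log's item dicts; host_groups is a
# hypothetical stand-in for the real Ansible inventory group membership.

services = {
    "prometheus-cadvisor": {"group": "prometheus-cadvisor", "enabled": True},
    "prometheus-libvirt-exporter": {"group": "prometheus-libvirt-exporter", "enabled": True},
    "prometheus-openstack-exporter": {"group": "prometheus-openstack-exporter", "enabled": False},
}

host_groups = {
    "testbed-manager": {"prometheus-cadvisor"},
    "testbed-node-4": {"prometheus-cadvisor", "prometheus-libvirt-exporter"},
}

def loop_result(host: str, service_name: str) -> str:
    """Status Ansible would print for this (host, item) pair."""
    svc = services[service_name]
    if svc["enabled"] and svc["group"] in host_groups.get(host, set()):
        return "changed"
    return "skipping"

print(loop_result("testbed-node-4", "prometheus-libvirt-exporter"))   # changed
print(loop_result("testbed-manager", "prometheus-openstack-exporter"))  # skipping
```

Disabled services (e.g. `prometheus-openstack-exporter`, `enabled: False`) skip on every host, which matches the log above.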
2025-05-17 01:07:36.767328 | orchestrator |
2025-05-17 01:07:36.767348 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-05-17 01:07:36.767368 | orchestrator | Saturday 17 May 2025 01:03:48 +0000 (0:00:04.521) 0:00:08.074 **********
2025-05-17 01:07:36.767398 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 01:07:36.767432 | orchestrator |
2025-05-17 01:07:36.767450 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-05-17 01:07:36.767469 | orchestrator | Saturday 17 May 2025 01:03:51 +0000 (0:00:02.443) 0:00:10.517 **********
2025-05-17 01:07:36.767495 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 01:07:36.767516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes':
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.767537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.767557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.767649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.767704 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.767718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.767746 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.767759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.767770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.767782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.767793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.767815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.767862 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.767893 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.767905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.767917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.767928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.767940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.767951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.767972 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-17 01:07:36.767993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.768009 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.768021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.768033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.768045 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.768061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.768073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.768091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
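Each `item` in the loop results above is a kolla-ansible service definition: a container name, an image reference, and a list of bind mounts in `docker -v` syntax. As a minimal sketch of how such a definition maps onto a container launch — `build_run_args` is a hypothetical helper for illustration, not part of kolla-ansible — the `volumes` list translates one-to-one into `-v` flags and `pid_mode` into `--pid`:

```python
# Sketch: turn a kolla-ansible-style service definition (as printed in
# the loop items above) into a `docker run` argument list.
# `build_run_args` is a hypothetical helper, not kolla-ansible code.

def build_run_args(service: dict) -> list[str]:
    args = ["docker", "run", "-d", "--name", service["container_name"]]
    if service.get("pid_mode"):
        # e.g. prometheus-node-exporter runs with 'pid_mode': 'host'
        args += ["--pid", service["pid_mode"]]
    for volume in service.get("volumes", []):
        # entries are already in "src:dst[:opts]" docker syntax
        args += ["-v", volume]
    args.append(service["image"])
    return args

# Example definition copied from the log output above.
svc = {
    "container_name": "prometheus_memcached_exporter",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206",
    "volumes": [
        "/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro",
        "kolla_logs:/var/log/kolla/",
    ],
}

print(" ".join(build_run_args(svc)))
```

The `haproxy` sub-dict seen on `prometheus-server` and `prometheus-alertmanager` items is not consumed here; in kolla-ansible it feeds the load-balancer configuration rather than the container itself.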
2025-05-17 01:07:36.768103 | orchestrator |
2025-05-17 01:07:36.768114 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-05-17 01:07:36.768126 | orchestrator | Saturday 17 May 2025 01:03:57 +0000 (0:00:06.521) 0:00:17.039 **********
2025-05-17 01:07:36.768142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 01:07:36.768154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.768165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.768177 | orchestrator | skipping: [testbed-node-0]
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.768200 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.768226 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}})  2025-05-17 01:07:36.768239 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.768255 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768268 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.768280 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.768292 | orchestrator | skipping: [testbed-manager] 2025-05-17 01:07:36.768303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.768326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.768339 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.768350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.768377 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.768389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.768400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768430 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.768448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.768460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768482 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.768499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.768510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.768522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.768533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.768562 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.768580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.768592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 
01:07:36.768615 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:07:36.768626 | orchestrator |
2025-05-17 01:07:36.768844 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-05-17 01:07:36.768865 | orchestrator | Saturday 17 May 2025 01:03:59 +0000 (0:00:01.697) 0:00:18.736 **********
2025-05-17 01:07:36.768877 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-17 01:07:36.768889 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-17 01:07:36.768900 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.768996 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.769015 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769027 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.769048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.769091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.769189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.769230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769242 | orchestrator | skipping: [testbed-manager] 2025-05-17 01:07:36.769253 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.769265 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.769276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.769295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.769426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.769438 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.769449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.769466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.769478 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.769497 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.769508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.769519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.769597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.769613 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.769624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-17 01:07:36.769636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.769653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.769665 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.769754 | orchestrator | 2025-05-17 01:07:36.769767 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-17 01:07:36.769778 | orchestrator | Saturday 17 May 2025 01:04:01 +0000 (0:00:02.028) 0:00:20.765 ********** 2025-05-17 01:07:36.769790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.769811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.769860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.769874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.769892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.769904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.769923 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-17 01:07:36.769935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.769976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.769990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 
01:07:36.770003 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.770051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.770074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.770087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.770099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.770111 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.770156 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.770170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.770182 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.770199 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.770219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.770233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.770247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.770261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.770305 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.770319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.770331 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.770358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.770372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.770385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.770424 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.770450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.770474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.770487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 01:07:36.770528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.770542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 01:07:36.770595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770694 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.770718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770730 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 01:07:36.770741 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.770763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.770821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.770837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.770859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.770870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.770907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.770920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.770943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.770955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 01:07:36.770966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 01:07:36.771004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-17 01:07:36.771024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-17 01:07:36.771040 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771051 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.771062 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.771141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.771189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-17 01:07:36.771271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-17 01:07:36.771282 | orchestrator |
2025-05-17 01:07:36.771292 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-05-17 01:07:36.771303 | orchestrator | Saturday 17 May 2025 01:04:08 +0000 (0:00:06.493) 0:00:27.258 **********
2025-05-17 01:07:36.771313 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-17 01:07:36.771323 | orchestrator |
2025-05-17 01:07:36.771333 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-05-17 01:07:36.771343 | orchestrator | Saturday 17 May 2025 01:04:08 +0000 (0:00:00.465) 0:00:27.724 **********
2025-05-17 01:07:36.771357 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090589, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2396913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771368 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090589, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2396913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771379 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090589, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2396913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771390 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090589, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2396913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771435 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090589, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2396913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771447 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090589, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2396913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771458 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090657, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771474 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090657, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771485 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090657, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771496 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090657, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771507 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090657, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771550 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090652, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771562 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090657, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771573 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1090589, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2396913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771589 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090652, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.771599 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090652, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771610 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090652, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771628 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090652, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771665 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090655, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771727 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090652, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771739 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090655, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771755 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090655, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771766 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090655, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771776 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090678, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771796 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090655, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771836 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090655, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771849 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090678, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771859 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090678, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771875 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090678, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771885 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090659, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771896 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090678, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771913 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1090657, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2426913, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 01:07:36.771951 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090678, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771964 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090659, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771975 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090659, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.771990 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090659, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772001 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090659, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772012 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090654, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772029 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090659, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772065 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090654, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772078 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090654, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772089 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090658, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772105 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090654, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772115 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090654, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772129 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090658, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772137 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090654, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772167 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090658, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772177 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090658, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772187 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090658, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772202 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090676, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772211 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090676, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772225 | orchestrator | skipping: [testbed-node-0] 
=> (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090658, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772234 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1090652, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 01:07:36.772264 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090653, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772274 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090676, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772283 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090676, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772295 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090676, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772304 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090653, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772319 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090665, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772327 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.772341 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090676, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772370 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090653, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772380 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090665, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772389 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.772398 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090653, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772410 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090653, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772425 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090653, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772434 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090665, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772442 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.772451 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090665, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-05-17 01:07:36.772459 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.772489 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090665, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772499 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.772508 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090665, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-17 01:07:36.772516 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.772525 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090655, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 01:07:36.772542 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090678, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 01:07:36.772551 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090659, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 01:07:36.772560 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090654, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2416914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 01:07:36.772568 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090658, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2426913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 01:07:36.772597 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090676, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2466912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 01:07:36.772607 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090653, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2406912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-17 01:07:36.772616 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1090665, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.2436912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-17 01:07:36.772630 | orchestrator |
2025-05-17 01:07:36.772638 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-17 01:07:36.772650 | orchestrator | Saturday 17 May 2025 01:04:42 +0000 (0:00:33.590) 0:01:01.315 **********
2025-05-17 01:07:36.772658 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-17 01:07:36.772666 | orchestrator |
2025-05-17 01:07:36.772687 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-17 01:07:36.772696 | orchestrator | Saturday 17 May 2025 01:04:42 +0000 (0:00:00.458) 0:01:01.774 **********
2025-05-17 01:07:36.772704 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-05-17 01:07:36.772746 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-17 01:07:36.772754 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-05-17 01:07:36.772793 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-17 01:07:36.772801 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-05-17 01:07:36.772840 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-05-17 01:07:36.772879 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-05-17 01:07:36.772918 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-05-17 01:07:36.772957 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-05-17 01:07:36.773028 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-17 01:07:36.773037 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-17 01:07:36.773045 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-17 01:07:36.773053 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-17 01:07:36.773061 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-17 01:07:36.773070 | orchestrator |
2025-05-17 01:07:36.773078 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-17 01:07:36.773086 | orchestrator | Saturday 17 May 2025 01:04:44 +0000 (0:00:01.880) 0:01:03.655 **********
2025-05-17 01:07:36.773094 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-17 01:07:36.773102 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:07:36.773110 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-17 01:07:36.773119 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:07:36.773127 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-17 01:07:36.773135 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:07:36.773143 | orchestrator | skipping: 
[testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-17 01:07:36.773151 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.773159 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-17 01:07:36.773167 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.773175 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-17 01:07:36.773184 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.773192 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-17 01:07:36.773200 | orchestrator | 2025-05-17 01:07:36.773208 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-17 01:07:36.773220 | orchestrator | Saturday 17 May 2025 01:05:00 +0000 (0:00:16.125) 0:01:19.780 ********** 2025-05-17 01:07:36.773229 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-17 01:07:36.773236 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.773244 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-17 01:07:36.773252 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.773260 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-17 01:07:36.773268 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.773276 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-17 01:07:36.773284 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.773291 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-17 01:07:36.773299 
| orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.773307 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-17 01:07:36.773315 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.773323 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-17 01:07:36.773331 | orchestrator | 2025-05-17 01:07:36.773339 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-05-17 01:07:36.773349 | orchestrator | Saturday 17 May 2025 01:05:05 +0000 (0:00:04.472) 0:01:24.253 ********** 2025-05-17 01:07:36.773362 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-17 01:07:36.773375 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.773400 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-17 01:07:36.773420 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.773433 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-17 01:07:36.773445 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.773458 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-17 01:07:36.773470 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.773482 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-17 01:07:36.773493 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.773507 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-17 01:07:36.773520 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.773533 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-17 01:07:36.773547 | orchestrator | 2025-05-17 01:07:36.773560 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-17 01:07:36.773574 | orchestrator | Saturday 17 May 2025 01:05:08 +0000 (0:00:03.070) 0:01:27.323 ********** 2025-05-17 01:07:36.773587 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-17 01:07:36.773599 | orchestrator | 2025-05-17 01:07:36.773613 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-17 01:07:36.773622 | orchestrator | Saturday 17 May 2025 01:05:08 +0000 (0:00:00.467) 0:01:27.791 ********** 2025-05-17 01:07:36.773629 | orchestrator | skipping: [testbed-manager] 2025-05-17 01:07:36.773637 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.773645 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.773653 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.773661 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.773687 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.773701 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.773714 | orchestrator | 2025-05-17 01:07:36.773723 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-17 01:07:36.773731 | orchestrator | Saturday 17 May 2025 01:05:09 +0000 (0:00:01.063) 0:01:28.855 ********** 2025-05-17 01:07:36.773739 | orchestrator | skipping: [testbed-manager] 2025-05-17 01:07:36.773746 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.773754 | orchestrator | skipping: [testbed-node-3] 
2025-05-17 01:07:36.773762 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.773769 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:07:36.773777 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:07:36.773785 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:07:36.773793 | orchestrator | 2025-05-17 01:07:36.773801 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-17 01:07:36.773808 | orchestrator | Saturday 17 May 2025 01:05:14 +0000 (0:00:04.575) 0:01:33.430 ********** 2025-05-17 01:07:36.773816 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-17 01:07:36.773824 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.773832 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-17 01:07:36.773840 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.773848 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-17 01:07:36.773855 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.773869 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-17 01:07:36.773884 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.773892 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-17 01:07:36.773899 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.773907 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-17 01:07:36.773915 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.773923 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-17 01:07:36.773931 | orchestrator | skipping: [testbed-manager] 
2025-05-17 01:07:36.773939 | orchestrator | 2025-05-17 01:07:36.773946 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-17 01:07:36.773954 | orchestrator | Saturday 17 May 2025 01:05:18 +0000 (0:00:04.187) 0:01:37.618 ********** 2025-05-17 01:07:36.773962 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-17 01:07:36.773970 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.773978 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-17 01:07:36.773986 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.773994 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-17 01:07:36.774002 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.774009 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-17 01:07:36.774042 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.774051 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-17 01:07:36.774059 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.774067 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-17 01:07:36.774076 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.774084 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-17 01:07:36.774092 | orchestrator | 2025-05-17 01:07:36.774099 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-17 01:07:36.774107 | orchestrator 
| Saturday 17 May 2025 01:05:22 +0000 (0:00:04.399) 0:01:42.017 **********
2025-05-17 01:07:36.774115 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-05-17 01:07:36.774155 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-17 01:07:36.774163 | orchestrator |
2025-05-17 01:07:36.774171 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-05-17 01:07:36.774179 | orchestrator | Saturday 17 May 2025 01:05:24 +0000 (0:00:01.588) 0:01:43.605 **********
2025-05-17 01:07:36.774187 | orchestrator | skipping: [testbed-manager]
2025-05-17 01:07:36.774194 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:07:36.774202 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:07:36.774210 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:07:36.774218 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:07:36.774226 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:07:36.774241 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:07:36.774249 | orchestrator |
2025-05-17 01:07:36.774257 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-05-17 01:07:36.774266 | orchestrator | Saturday 17 May 2025 01:05:25 +0000 (0:00:00.941) 0:01:44.547 **********
2025-05-17 01:07:36.774282 | orchestrator | skipping: [testbed-manager]
2025-05-17 01:07:36.774290 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:07:36.774298 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:07:36.774306 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:07:36.774314 | orchestrator | skipping: 
[testbed-node-3] 2025-05-17 01:07:36.774322 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.774330 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.774338 | orchestrator | 2025-05-17 01:07:36.774346 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-17 01:07:36.774354 | orchestrator | Saturday 17 May 2025 01:05:26 +0000 (0:00:01.201) 0:01:45.748 ********** 2025-05-17 01:07:36.774362 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-17 01:07:36.774370 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.774378 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-17 01:07:36.774386 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.774394 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-17 01:07:36.774402 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.774410 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-17 01:07:36.774418 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.774426 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-17 01:07:36.774434 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.774444 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-17 01:07:36.774467 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.774489 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-17 01:07:36.774502 | orchestrator | skipping: [testbed-manager] 2025-05-17 01:07:36.774515 | orchestrator | 
2025-05-17 01:07:36.774527 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-17 01:07:36.774539 | orchestrator | Saturday 17 May 2025 01:05:29 +0000 (0:00:02.898) 0:01:48.647 ********** 2025-05-17 01:07:36.774550 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-17 01:07:36.774562 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:36.774573 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-17 01:07:36.774584 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:36.774596 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-17 01:07:36.774608 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:36.774621 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-17 01:07:36.774633 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:36.774645 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-17 01:07:36.774657 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:36.774723 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-17 01:07:36.774739 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:36.774750 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-17 01:07:36.774763 | orchestrator | skipping: [testbed-manager] 2025-05-17 01:07:36.774776 | orchestrator | 2025-05-17 01:07:36.774787 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-17 01:07:36.774800 | orchestrator | Saturday 17 May 2025 01:05:32 +0000 (0:00:03.093) 
0:01:51.740 ********** 2025-05-17 01:07:36.774826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.774852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.774866 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-17 01:07:36.774886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.774900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.774920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.774939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.774952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-17 01:07:36.774965 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.774983 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.774996 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.775030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.775044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.775094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.775111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775124 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.775159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-17 01:07:36.775172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775209 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.775227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.775243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.775260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.775267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.775287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.775305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.775317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.775324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.775351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.775362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.775373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.775380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-17 01:07:36.775412 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-17 01:07:36.775420 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.775432 
| orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.775451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.775466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.775488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-17 
01:07:36.775495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.775502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775509 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.775520 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.775527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.775537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.775550 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.775564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.775575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-17 01:07:36.775583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.775598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-17 01:07:36.775606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.775618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-17 01:07:36.775625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.775632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.775654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.775668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-17 01:07:36.775696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.775708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-17 01:07:36.775722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-17 01:07:36.775777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-17 01:07:36.775785 | orchestrator | 2025-05-17 01:07:36.775792 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-17 01:07:36.775799 | orchestrator | Saturday 17 May 2025 01:05:37 +0000 (0:00:04.948) 0:01:56.689 ********** 2025-05-17 01:07:36.775806 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-17 01:07:36.775812 | orchestrator | 2025-05-17 01:07:36.775819 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 
2025-05-17 01:07:36.775826 | orchestrator | Saturday 17 May 2025 01:05:40 +0000 (0:00:03.147) 0:01:59.836 ********** 2025-05-17 01:07:36.775833 | orchestrator | 2025-05-17 01:07:36.775844 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-17 01:07:36.775856 | orchestrator | Saturday 17 May 2025 01:05:40 +0000 (0:00:00.091) 0:01:59.927 ********** 2025-05-17 01:07:36.775867 | orchestrator | 2025-05-17 01:07:36.775878 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-17 01:07:36.775888 | orchestrator | Saturday 17 May 2025 01:05:40 +0000 (0:00:00.287) 0:02:00.215 ********** 2025-05-17 01:07:36.775898 | orchestrator | 2025-05-17 01:07:36.775908 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-17 01:07:36.775919 | orchestrator | Saturday 17 May 2025 01:05:41 +0000 (0:00:00.058) 0:02:00.273 ********** 2025-05-17 01:07:36.775929 | orchestrator | 2025-05-17 01:07:36.775939 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-17 01:07:36.775949 | orchestrator | Saturday 17 May 2025 01:05:41 +0000 (0:00:00.054) 0:02:00.328 ********** 2025-05-17 01:07:36.775959 | orchestrator | 2025-05-17 01:07:36.775969 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-17 01:07:36.775980 | orchestrator | Saturday 17 May 2025 01:05:41 +0000 (0:00:00.052) 0:02:00.380 ********** 2025-05-17 01:07:36.775991 | orchestrator | 2025-05-17 01:07:36.776003 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-17 01:07:36.776014 | orchestrator | Saturday 17 May 2025 01:05:41 +0000 (0:00:00.244) 0:02:00.624 ********** 2025-05-17 01:07:36.776025 | orchestrator | 2025-05-17 01:07:36.776036 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] 
************* 2025-05-17 01:07:36.776048 | orchestrator | Saturday 17 May 2025 01:05:41 +0000 (0:00:00.071) 0:02:00.696 ********** 2025-05-17 01:07:36.776059 | orchestrator | changed: [testbed-manager] 2025-05-17 01:07:36.776072 | orchestrator | 2025-05-17 01:07:36.776079 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-17 01:07:36.776085 | orchestrator | Saturday 17 May 2025 01:06:00 +0000 (0:00:19.318) 0:02:20.015 ********** 2025-05-17 01:07:36.776099 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:07:36.776106 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:07:36.776119 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:07:36.776126 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:07:36.776132 | orchestrator | changed: [testbed-manager] 2025-05-17 01:07:36.776139 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:07:36.776146 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:07:36.776152 | orchestrator | 2025-05-17 01:07:36.776159 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-17 01:07:36.776166 | orchestrator | Saturday 17 May 2025 01:06:20 +0000 (0:00:19.935) 0:02:39.951 ********** 2025-05-17 01:07:36.776172 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:07:36.776178 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:07:36.776185 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:07:36.776192 | orchestrator | 2025-05-17 01:07:36.776198 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-17 01:07:36.776205 | orchestrator | Saturday 17 May 2025 01:06:32 +0000 (0:00:11.314) 0:02:51.266 ********** 2025-05-17 01:07:36.776211 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:07:36.776218 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:07:36.776224 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:07:36.776231 
| orchestrator | 2025-05-17 01:07:36.776237 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-17 01:07:36.776244 | orchestrator | Saturday 17 May 2025 01:06:38 +0000 (0:00:06.670) 0:02:57.937 ********** 2025-05-17 01:07:36.776250 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:07:36.776257 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:07:36.776263 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:07:36.776270 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:07:36.776276 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:07:36.776283 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:07:36.776289 | orchestrator | changed: [testbed-manager] 2025-05-17 01:07:36.776296 | orchestrator | 2025-05-17 01:07:36.776302 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-17 01:07:36.776309 | orchestrator | Saturday 17 May 2025 01:06:56 +0000 (0:00:18.150) 0:03:16.087 ********** 2025-05-17 01:07:36.776316 | orchestrator | changed: [testbed-manager] 2025-05-17 01:07:36.776322 | orchestrator | 2025-05-17 01:07:36.776329 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-17 01:07:36.776340 | orchestrator | Saturday 17 May 2025 01:07:08 +0000 (0:00:11.223) 0:03:27.311 ********** 2025-05-17 01:07:36.776347 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:07:36.776353 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:07:36.776360 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:07:36.776370 | orchestrator | 2025-05-17 01:07:36.776381 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-17 01:07:36.776400 | orchestrator | Saturday 17 May 2025 01:07:15 +0000 (0:00:07.036) 0:03:34.348 ********** 2025-05-17 01:07:36.776412 | orchestrator | changed: [testbed-manager] 2025-05-17 01:07:36.776423 | 
orchestrator | 2025-05-17 01:07:36.776434 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-17 01:07:36.776445 | orchestrator | Saturday 17 May 2025 01:07:23 +0000 (0:00:08.111) 0:03:42.459 ********** 2025-05-17 01:07:36.776455 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:07:36.776464 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:07:36.776476 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:07:36.776487 | orchestrator | 2025-05-17 01:07:36.776498 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 01:07:36.776509 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-17 01:07:36.776522 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-17 01:07:36.776541 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-17 01:07:36.776554 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-17 01:07:36.776562 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-17 01:07:36.776568 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-17 01:07:36.776575 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-17 01:07:36.776582 | orchestrator | 2025-05-17 01:07:36.776588 | orchestrator | 2025-05-17 01:07:36.776595 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:07:36.776602 | orchestrator | Saturday 17 May 2025 01:07:35 +0000 (0:00:12.153) 0:03:54.613 ********** 2025-05-17 01:07:36.776609 | orchestrator | 
=============================================================================== 2025-05-17 01:07:36.776615 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 33.59s 2025-05-17 01:07:36.776622 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 19.94s 2025-05-17 01:07:36.776629 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.32s 2025-05-17 01:07:36.776635 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.15s 2025-05-17 01:07:36.776642 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.13s 2025-05-17 01:07:36.776649 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.15s 2025-05-17 01:07:36.776661 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.32s 2025-05-17 01:07:36.776668 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.22s 2025-05-17 01:07:36.776693 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 8.11s 2025-05-17 01:07:36.776700 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 7.04s 2025-05-17 01:07:36.776707 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.67s 2025-05-17 01:07:36.776714 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.52s 2025-05-17 01:07:36.776721 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.49s 2025-05-17 01:07:36.776728 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.95s 2025-05-17 01:07:36.776735 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.58s 2025-05-17 01:07:36.776741 | orchestrator | prometheus : 
Ensuring config directories exist -------------------------- 4.52s 2025-05-17 01:07:36.776748 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.47s 2025-05-17 01:07:36.776755 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.40s 2025-05-17 01:07:36.776761 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 4.19s 2025-05-17 01:07:36.776768 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 3.15s 2025-05-17 01:07:36.776774 | orchestrator | 2025-05-17 01:07:36 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:07:36.776781 | orchestrator | 2025-05-17 01:07:36 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:07:36.776788 | orchestrator | 2025-05-17 01:07:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:07:39.802486 | orchestrator | 2025-05-17 01:07:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:07:39.802623 | orchestrator | 2025-05-17 01:07:39 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:07:39.803649 | orchestrator | 2025-05-17 01:07:39 | INFO  | Task a9cb93d0-e7ff-4262-b229-8c26f973f381 is in state STARTED 2025-05-17 01:07:39.804483 | orchestrator | 2025-05-17 01:07:39 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:07:39.807735 | orchestrator | 2025-05-17 01:07:39 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:07:39.807774 | orchestrator | 2025-05-17 01:07:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:07:42.853635 | orchestrator | 2025-05-17 01:07:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:07:42.855612 | orchestrator | 2025-05-17 01:07:42 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in 
state STARTED 2025-05-17 01:07:42.858265 | orchestrator | 2025-05-17 01:07:42 | INFO  | Task a9cb93d0-e7ff-4262-b229-8c26f973f381 is in state STARTED 2025-05-17 01:07:42.860126 | orchestrator | 2025-05-17 01:07:42 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:07:42.862163 | orchestrator | 2025-05-17 01:07:42 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:07:42.862213 | orchestrator | 2025-05-17 01:07:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:07:45.912025 | orchestrator | 2025-05-17 01:07:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:07:45.913281 | orchestrator | 2025-05-17 01:07:45 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:07:45.915274 | orchestrator | 2025-05-17 01:07:45 | INFO  | Task a9cb93d0-e7ff-4262-b229-8c26f973f381 is in state STARTED 2025-05-17 01:07:45.916270 | orchestrator | 2025-05-17 01:07:45 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:07:45.918278 | orchestrator | 2025-05-17 01:07:45 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:07:45.918330 | orchestrator | 2025-05-17 01:07:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:07:48.969457 | orchestrator | 2025-05-17 01:07:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:07:48.970301 | orchestrator | 2025-05-17 01:07:48 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state STARTED 2025-05-17 01:07:48.971574 | orchestrator | 2025-05-17 01:07:48 | INFO  | Task a9cb93d0-e7ff-4262-b229-8c26f973f381 is in state STARTED 2025-05-17 01:07:48.973719 | orchestrator | 2025-05-17 01:07:48 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:07:48.974579 | orchestrator | 2025-05-17 01:07:48 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state 
STARTED 2025-05-17 01:07:48.974616 | orchestrator | 2025-05-17 01:07:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:07:52.018084 | orchestrator | 2025-05-17 01:07:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:07:52.018194 | orchestrator | 2025-05-17 01:07:52 | INFO  | Task cecb7bd5-0ff1-4cb9-a25b-df8e64d87654 is in state SUCCESS 2025-05-17 01:07:52.019752 | orchestrator | 2025-05-17 01:07:52.022218 | orchestrator | 2025-05-17 01:07:52.022280 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:07:52.022303 | orchestrator | 2025-05-17 01:07:52.022322 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:07:52.022343 | orchestrator | Saturday 17 May 2025 01:05:04 +0000 (0:00:00.201) 0:00:00.201 ********** 2025-05-17 01:07:52.022394 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:07:52.022416 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:07:52.022434 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:07:52.022453 | orchestrator | ok: [testbed-node-3] 2025-05-17 01:07:52.022748 | orchestrator | ok: [testbed-node-4] 2025-05-17 01:07:52.022776 | orchestrator | ok: [testbed-node-5] 2025-05-17 01:07:52.022796 | orchestrator | 2025-05-17 01:07:52.022815 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 01:07:52.022835 | orchestrator | Saturday 17 May 2025 01:05:04 +0000 (0:00:00.446) 0:00:00.647 ********** 2025-05-17 01:07:52.022854 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-17 01:07:52.022874 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-17 01:07:52.022893 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-17 01:07:52.022913 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-17 01:07:52.022931 | orchestrator | ok: 
[testbed-node-4] => (item=enable_cinder_True) 2025-05-17 01:07:52.022950 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-17 01:07:52.022969 | orchestrator | 2025-05-17 01:07:52.022988 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-17 01:07:52.023008 | orchestrator | 2025-05-17 01:07:52.023027 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-17 01:07:52.023064 | orchestrator | Saturday 17 May 2025 01:05:05 +0000 (0:00:00.658) 0:00:01.306 ********** 2025-05-17 01:07:52.023085 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 01:07:52.023105 | orchestrator | 2025-05-17 01:07:52.023124 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-17 01:07:52.023143 | orchestrator | Saturday 17 May 2025 01:05:06 +0000 (0:00:01.379) 0:00:02.687 ********** 2025-05-17 01:07:52.023163 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-17 01:07:52.023182 | orchestrator | 2025-05-17 01:07:52.023201 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-17 01:07:52.023220 | orchestrator | Saturday 17 May 2025 01:05:10 +0000 (0:00:03.565) 0:00:06.253 ********** 2025-05-17 01:07:52.023239 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-17 01:07:52.023259 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-17 01:07:52.023276 | orchestrator | 2025-05-17 01:07:52.023293 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-17 01:07:52.023312 | orchestrator | Saturday 17 
May 2025 01:05:17 +0000 (0:00:07.003) 0:00:13.256 ********** 2025-05-17 01:07:52.023331 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-17 01:07:52.023350 | orchestrator | 2025-05-17 01:07:52.023369 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-17 01:07:52.023387 | orchestrator | Saturday 17 May 2025 01:05:20 +0000 (0:00:03.774) 0:00:17.031 ********** 2025-05-17 01:07:52.023406 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-17 01:07:52.023425 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-17 01:07:52.023442 | orchestrator | 2025-05-17 01:07:52.023459 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-17 01:07:52.023476 | orchestrator | Saturday 17 May 2025 01:05:24 +0000 (0:00:03.703) 0:00:20.734 ********** 2025-05-17 01:07:52.023493 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-17 01:07:52.023509 | orchestrator | 2025-05-17 01:07:52.023526 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-17 01:07:52.023543 | orchestrator | Saturday 17 May 2025 01:05:28 +0000 (0:00:03.471) 0:00:24.205 ********** 2025-05-17 01:07:52.023560 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-17 01:07:52.023591 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-17 01:07:52.023607 | orchestrator | 2025-05-17 01:07:52.023625 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-17 01:07:52.023642 | orchestrator | Saturday 17 May 2025 01:05:36 +0000 (0:00:08.526) 0:00:32.732 ********** 2025-05-17 01:07:52.023750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 01:07:52.023774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 01:07:52.023798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.023817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.023836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.023864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.023924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 01:07:52.023950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.023968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.023987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}}) 2025-05-17 01:07:52.024015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.024073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.024093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.024117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.024135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2025-05-17 01:07:52.024162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.024220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.024239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.024262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.024281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.024307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.024323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.024379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.024405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.024421 | orchestrator | 2025-05-17 01:07:52.024438 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-17 01:07:52.024456 | orchestrator | Saturday 17 May 2025 01:05:38 +0000 (0:00:01.955) 0:00:34.687 ********** 2025-05-17 01:07:52.024473 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:52.024492 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:52.024509 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:52.024526 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 01:07:52.024543 | 
orchestrator | 2025-05-17 01:07:52.024560 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-17 01:07:52.024577 | orchestrator | Saturday 17 May 2025 01:05:39 +0000 (0:00:00.980) 0:00:35.667 ********** 2025-05-17 01:07:52.024604 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-17 01:07:52.024622 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-17 01:07:52.024639 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-17 01:07:52.024656 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-17 01:07:52.024697 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-17 01:07:52.024714 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-17 01:07:52.024732 | orchestrator | 2025-05-17 01:07:52.024749 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-17 01:07:52.024766 | orchestrator | Saturday 17 May 2025 01:05:42 +0000 (0:00:02.747) 0:00:38.415 ********** 2025-05-17 01:07:52.024785 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-17 01:07:52.024805 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-17 01:07:52.024868 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-17 01:07:52.024894 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-17 01:07:52.024931 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-17 01:07:52.024949 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-17 01:07:52.024967 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-17 01:07:52.025044 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-17 01:07:52.025069 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-17 01:07:52.025096 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-17 01:07:52.025114 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-17 01:07:52.025169 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-17 01:07:52.025188 | orchestrator | 2025-05-17 01:07:52.025205 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-17 01:07:52.025222 | orchestrator | Saturday 17 May 2025 01:05:45 +0000 (0:00:03.430) 0:00:41.846 ********** 2025-05-17 01:07:52.025239 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-17 01:07:52.025256 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}) 2025-05-17 01:07:52.025273 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-17 01:07:52.025290 | orchestrator | 2025-05-17 01:07:52.025307 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-17 01:07:52.025323 | orchestrator | Saturday 17 May 2025 01:05:47 +0000 (0:00:01.700) 0:00:43.546 ********** 2025-05-17 01:07:52.025340 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-17 01:07:52.025357 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-17 01:07:52.025374 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-17 01:07:52.025399 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-17 01:07:52.025416 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-17 01:07:52.025433 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-17 01:07:52.025451 | orchestrator | 2025-05-17 01:07:52.025474 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-17 01:07:52.025491 | orchestrator | Saturday 17 May 2025 01:05:50 +0000 (0:00:02.852) 0:00:46.399 ********** 2025-05-17 01:07:52.025507 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-17 01:07:52.025524 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-17 01:07:52.025541 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-17 01:07:52.025558 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-17 01:07:52.025575 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-17 01:07:52.025592 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-17 01:07:52.025609 | orchestrator | 2025-05-17 
01:07:52.025626 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-17 01:07:52.025643 | orchestrator | Saturday 17 May 2025 01:05:51 +0000 (0:00:01.170) 0:00:47.570 ********** 2025-05-17 01:07:52.025687 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:52.025706 | orchestrator | 2025-05-17 01:07:52.025723 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-17 01:07:52.025740 | orchestrator | Saturday 17 May 2025 01:05:51 +0000 (0:00:00.117) 0:00:47.688 ********** 2025-05-17 01:07:52.025757 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:52.025774 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:52.025791 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:52.025808 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:52.025825 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:52.025842 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:52.025859 | orchestrator | 2025-05-17 01:07:52.025876 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-17 01:07:52.025893 | orchestrator | Saturday 17 May 2025 01:05:52 +0000 (0:00:00.856) 0:00:48.544 ********** 2025-05-17 01:07:52.025911 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 01:07:52.025929 | orchestrator | 2025-05-17 01:07:52.025947 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-17 01:07:52.025963 | orchestrator | Saturday 17 May 2025 01:05:53 +0000 (0:00:01.332) 0:00:49.876 ********** 2025-05-17 01:07:52.025982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 01:07:52.026098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 01:07:52.026138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 01:07:52.026157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.026175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.026193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.026250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.026276 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.026299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.026316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 
01:07:52.026333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.026350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.026375 | orchestrator | 2025-05-17 01:07:52.026391 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-17 01:07:52.026408 | orchestrator | Saturday 17 May 2025 01:05:56 +0000 (0:00:02.999) 0:00:52.875 ********** 2025-05-17 01:07:52.026463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.026488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.026507 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:52.026525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.026543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.026560 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:52.026578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.026645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.026714 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:52.026739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.026758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.026775 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:52.026794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.026812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.026838 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:52.026900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.026919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.026934 | orchestrator | skipping: 
[testbed-node-5] 2025-05-17 01:07:52.026948 | orchestrator | 2025-05-17 01:07:52.026967 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-17 01:07:52.026981 | orchestrator | Saturday 17 May 2025 01:05:58 +0000 (0:00:01.545) 0:00:54.421 ********** 2025-05-17 01:07:52.026996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.027011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.027025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.027084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.027100 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:52.027114 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:52.027133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.027148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.027162 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:52.027176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.027199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.027214 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:52.027259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.027275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027309 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:07:52.027323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027346 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:07:52.027360 | orchestrator |
2025-05-17 01:07:52.027374 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-05-17 01:07:52.027388 | orchestrator | Saturday 17 May 2025 01:06:00 +0000 (0:00:01.944) 0:00:56.365 **********
2025-05-17 01:07:52.027403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.027450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.027466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.027499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.027522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.027590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.027604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027915 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.027988 | orchestrator |
2025-05-17 01:07:52.028002 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-05-17 01:07:52.028016 | orchestrator | Saturday 17 May 2025 01:06:04 +0000 (0:00:04.458) 0:01:00.824 **********
2025-05-17 01:07:52.028030 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-17 01:07:52.028044 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:07:52.028063 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-17 01:07:52.028078 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:07:52.028092 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-17 01:07:52.028106 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:07:52.028127 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-17 01:07:52.028141 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-17 01:07:52.028154 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-17 01:07:52.028169 | orchestrator |
2025-05-17 01:07:52.028182 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-05-17 01:07:52.028196 | orchestrator | Saturday 17 May 2025 01:06:08 +0000 (0:00:04.090) 0:01:04.915 **********
2025-05-17 01:07:52.028211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.028225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.028261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.028307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.028334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.028357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-17 01:07:52.028400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.028566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.028593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.028607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.028622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.028637 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.028651 | orchestrator | 2025-05-17 01:07:52.028691 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-17 01:07:52.028706 | orchestrator | Saturday 17 May 2025 01:06:19 +0000 (0:00:10.511) 0:01:15.426 ********** 2025-05-17 01:07:52.028720 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:52.028734 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:52.028748 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:52.028762 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:07:52.028776 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:07:52.028790 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:07:52.028811 | orchestrator | 2025-05-17 01:07:52.028825 | 
orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-17 01:07:52.028839 | orchestrator | Saturday 17 May 2025 01:06:21 +0000 (0:00:02.205) 0:01:17.631 ********** 2025-05-17 01:07:52.028858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.028873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.028888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.028903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.028925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.028947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.028966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.028981 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.028996 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:52.029010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.029031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 
 2025-05-17 01:07:52.029087 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:52.029101 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:52.029115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.029130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029188 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:52.029212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.029227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029256 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029278 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:52.029300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.029320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 
 2025-05-17 01:07:52.029363 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:52.029377 | orchestrator | 2025-05-17 01:07:52.029391 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-17 01:07:52.029405 | orchestrator | Saturday 17 May 2025 01:06:22 +0000 (0:00:01.381) 0:01:19.013 ********** 2025-05-17 01:07:52.029419 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:07:52.029432 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:07:52.029446 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:07:52.029461 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:07:52.029482 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:07:52.029497 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:07:52.029511 | orchestrator | 2025-05-17 01:07:52.029525 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-17 01:07:52.029539 | orchestrator | Saturday 17 May 2025 01:06:23 +0000 (0:00:00.692) 0:01:19.705 ********** 2025-05-17 01:07:52.029560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2025-05-17 01:07:52.029575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.029609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-17 01:07:52.029647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 01:07:52.029751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 01:07:52.029767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-17 01:07:52.029782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.029812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.029826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.029844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-17 01:07:52.029859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.029918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2025-05-17 01:07:52.029951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.029965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.029987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-17 01:07:52.030009 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.030061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-17 01:07:52.030078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-17 01:07:52.030086 | orchestrator |
2025-05-17 01:07:52.030095 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-17 01:07:52.030103 | orchestrator | Saturday 17 May 2025 01:06:26 +0000 (0:00:02.435) 0:01:22.140 **********
2025-05-17 01:07:52.030111 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:07:52.030120 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:07:52.030128 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:07:52.030136 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:07:52.030143 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:07:52.030150 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:07:52.030157 | orchestrator |
2025-05-17 01:07:52.030163 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-05-17 01:07:52.030170 | orchestrator | Saturday 17 May 2025 01:06:26 +0000 (0:00:00.550) 0:01:22.691 **********
2025-05-17 01:07:52.030183 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:07:52.030190 | orchestrator |
2025-05-17 01:07:52.030196 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-05-17 01:07:52.030203 | orchestrator | Saturday 17 May 2025 01:06:28 +0000 (0:00:02.272) 0:01:24.964 **********
2025-05-17 01:07:52.030209 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:07:52.030219 | orchestrator |
2025-05-17 01:07:52.030231 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-05-17 01:07:52.030241 | orchestrator | Saturday 17 May 2025 01:06:31 +0000 (0:00:02.237) 0:01:27.202 **********
2025-05-17 01:07:52.030253 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:07:52.030268 | orchestrator |
2025-05-17 01:07:52.030283 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-17 01:07:52.030295 | orchestrator | Saturday 17 May 2025 01:06:47 +0000 (0:00:16.740) 0:01:43.942 **********
2025-05-17 01:07:52.030308 | orchestrator |
2025-05-17 01:07:52.030318 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-17 01:07:52.030330 | orchestrator | Saturday 17 May 2025 01:06:47 +0000 (0:00:00.063) 0:01:44.006 **********
2025-05-17 01:07:52.030337 | orchestrator |
2025-05-17 01:07:52.030344 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-17 01:07:52.030350 | orchestrator | Saturday 17 May 2025 01:06:48 +0000 (0:00:00.160) 0:01:44.167 **********
2025-05-17 01:07:52.030357 | orchestrator |
2025-05-17 01:07:52.030364 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-17 01:07:52.030370 | orchestrator | Saturday 17 May 2025 01:06:48 +0000 (0:00:00.048) 0:01:44.215 **********
2025-05-17 01:07:52.030377 | orchestrator |
2025-05-17 01:07:52.030384 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-17 01:07:52.030390 | orchestrator | Saturday 17 May 2025 01:06:48 +0000 (0:00:00.047) 0:01:44.262 **********
2025-05-17 01:07:52.030397 | orchestrator |
2025-05-17 01:07:52.030403 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-05-17 01:07:52.030410 | orchestrator | Saturday 17 May 2025 01:06:48 +0000 (0:00:00.047) 0:01:44.309 **********
2025-05-17 01:07:52.030417 | orchestrator |
2025-05-17 01:07:52.030423 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-05-17 01:07:52.030430 | orchestrator | Saturday 17 May 2025 01:06:48 +0000 (0:00:00.153) 0:01:44.463 **********
2025-05-17 01:07:52.030436 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:07:52.030443 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:07:52.030450 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:07:52.030457 | orchestrator |
2025-05-17 01:07:52.030463 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-05-17 01:07:52.030470 | orchestrator | Saturday 17 May 2025 01:07:06 +0000 (0:00:17.800) 0:02:02.263 **********
2025-05-17 01:07:52.030476 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:07:52.030483 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:07:52.030490 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:07:52.030496 | orchestrator |
2025-05-17 01:07:52.030503 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-05-17 01:07:52.030516 | orchestrator | Saturday 17 May 2025 01:07:13 +0000 (0:00:06.897) 0:02:09.161 **********
2025-05-17 01:07:52.030523 | orchestrator | changed: [testbed-node-4]
2025-05-17 01:07:52.030530 | orchestrator | changed: [testbed-node-3]
2025-05-17 01:07:52.030536 | orchestrator | changed: [testbed-node-5]
2025-05-17 01:07:52.030543 | orchestrator |
2025-05-17 01:07:52.030550 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-05-17 01:07:52.030556 | orchestrator | Saturday 17 May 2025 01:07:37 +0000 (0:00:24.582) 0:02:33.743 **********
2025-05-17 01:07:52.030563 | orchestrator | changed: [testbed-node-4]
2025-05-17 01:07:52.030570 | orchestrator | changed: [testbed-node-3]
2025-05-17 01:07:52.030576 | orchestrator | changed: [testbed-node-5]
2025-05-17 01:07:52.030583 | orchestrator |
2025-05-17 01:07:52.030590 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-05-17 01:07:52.030604 | orchestrator | Saturday 17 May 2025 01:07:49 +0000 (0:00:11.527) 0:02:45.270 **********
2025-05-17 01:07:52.030610 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:07:52.030617 | orchestrator |
2025-05-17 01:07:52.030624 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:07:52.030631 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-17 01:07:52.030638 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-17 01:07:52.030652 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-17 01:07:52.030659 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-17 01:07:52.030690 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-17 01:07:52.030696 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-17 01:07:52.030703 | orchestrator |
2025-05-17 01:07:52.030710 | orchestrator |
2025-05-17 01:07:52.030717 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 01:07:52.030724 | orchestrator | Saturday 17 May 2025 01:07:49 +0000 (0:00:00.556) 0:02:45.827 **********
2025-05-17 01:07:52.030730 | orchestrator | ===============================================================================
2025-05-17 01:07:52.030737 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.58s
2025-05-17 01:07:52.030743 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 17.80s
2025-05-17 01:07:52.030750 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 16.74s
2025-05-17 01:07:52.030756 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.53s
2025-05-17 01:07:52.030763 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.51s
2025-05-17 01:07:52.030770 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.53s
2025-05-17 01:07:52.030776 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.00s
2025-05-17 01:07:52.030783 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.90s
2025-05-17 01:07:52.030790 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.46s
2025-05-17 01:07:52.030796 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 4.09s
2025-05-17 01:07:52.030803 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.77s
2025-05-17 01:07:52.030809 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.70s
2025-05-17 01:07:52.030816 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.57s
2025-05-17 01:07:52.030823 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.47s
2025-05-17 01:07:52.030829 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.43s
2025-05-17 01:07:52.030836 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.00s
2025-05-17 01:07:52.030842 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.85s
2025-05-17 01:07:52.030849 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.75s
2025-05-17 01:07:52.030856 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.44s
2025-05-17 01:07:52.030862 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.27s
2025-05-17 01:07:52.030874 | orchestrator | 2025-05-17 01:07:52 | INFO  | Task a9cb93d0-e7ff-4262-b229-8c26f973f381 is in state STARTED
2025-05-17 01:07:52.030881 | orchestrator | 2025-05-17 01:07:52 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED
2025-05-17 01:07:52.030888 | orchestrator | 2025-05-17 01:07:52 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED
2025-05-17 01:07:52.030895 | orchestrator | 2025-05-17 01:07:52 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED
2025-05-17 01:07:52.030911 | orchestrator | 2025-05-17 01:07:52 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:07:55.081015 | orchestrator | 2025-05-17 01:07:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:07:55.082179 | orchestrator | 2025-05-17 01:07:55 | INFO  | Task a9cb93d0-e7ff-4262-b229-8c26f973f381 is in state STARTED
2025-05-17 01:07:55.083236 | orchestrator | 2025-05-17 01:07:55 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED
2025-05-17 01:07:55.085324 | orchestrator | 2025-05-17 01:07:55 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED
2025-05-17 01:07:55.085990 | orchestrator | 2025-05-17 01:07:55 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED
2025-05-17 01:07:55.086189 | orchestrator | 2025-05-17 01:07:55 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:07:58.131222 | orchestrator | 2025-05-17 01:07:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:07:58.133799 | orchestrator | 2025-05-17
01:07:58 | INFO  | Task a9cb93d0-e7ff-4262-b229-8c26f973f381 is in state STARTED
2025-05-17 01:07:58.136207 | orchestrator | 2025-05-17 01:07:58 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED
2025-05-17 01:07:58.137950 | orchestrator | 2025-05-17 01:07:58 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED
2025-05-17 01:07:58.139500 | orchestrator | 2025-05-17 01:07:58 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED
2025-05-17 01:07:58.140122 | orchestrator | 2025-05-17 01:07:58 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:08:34.802434 | orchestrator | 2025-05-17 01:08:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:08:34.802548 | orchestrator | 2025-05-17 01:08:34 | INFO  | Task a9cb93d0-e7ff-4262-b229-8c26f973f381 is in state SUCCESS
2025-05-17 01:08:34.804692 | orchestrator | 2025-05-17 01:08:34 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED
2025-05-17 01:08:34.806944 | orchestrator | 2025-05-17 01:08:34 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED
2025-05-17 01:08:34.808038 | orchestrator | 2025-05-17 01:08:34 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED
2025-05-17 01:08:34.808100 | orchestrator | 2025-05-17 01:08:34 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:08:53.113791 | orchestrator | 2025-05-17 01:08:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:08:53.113927 | orchestrator | 2025-05-17 01:08:53 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED
2025-05-17 01:08:53.114232 | orchestrator | 2025-05-17 01:08:53 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED
2025-05-17 01:08:53.116224 | orchestrator | 2025-05-17 01:08:53 | INFO  | Task
583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:08:53.116263 | orchestrator | 2025-05-17 01:08:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:08:56.170425 | orchestrator | 2025-05-17 01:08:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:08:56.171935 | orchestrator | 2025-05-17 01:08:56 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:08:56.173603 | orchestrator | 2025-05-17 01:08:56 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:08:56.175481 | orchestrator | 2025-05-17 01:08:56 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:08:56.175652 | orchestrator | 2025-05-17 01:08:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:08:59.225597 | orchestrator | 2025-05-17 01:08:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:08:59.227290 | orchestrator | 2025-05-17 01:08:59 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:08:59.228534 | orchestrator | 2025-05-17 01:08:59 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:08:59.231900 | orchestrator | 2025-05-17 01:08:59 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:08:59.231948 | orchestrator | 2025-05-17 01:08:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:02.282242 | orchestrator | 2025-05-17 01:09:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:02.286246 | orchestrator | 2025-05-17 01:09:02 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:02.287985 | orchestrator | 2025-05-17 01:09:02 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:02.289290 | orchestrator | 2025-05-17 01:09:02 | INFO  | Task 
583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:09:02.289675 | orchestrator | 2025-05-17 01:09:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:05.337051 | orchestrator | 2025-05-17 01:09:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:05.338631 | orchestrator | 2025-05-17 01:09:05 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:05.339865 | orchestrator | 2025-05-17 01:09:05 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:05.341713 | orchestrator | 2025-05-17 01:09:05 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:09:05.341946 | orchestrator | 2025-05-17 01:09:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:08.386326 | orchestrator | 2025-05-17 01:09:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:08.387245 | orchestrator | 2025-05-17 01:09:08 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:08.388513 | orchestrator | 2025-05-17 01:09:08 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:08.389736 | orchestrator | 2025-05-17 01:09:08 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:09:08.389763 | orchestrator | 2025-05-17 01:09:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:11.436692 | orchestrator | 2025-05-17 01:09:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:11.437701 | orchestrator | 2025-05-17 01:09:11 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:11.439519 | orchestrator | 2025-05-17 01:09:11 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:11.440536 | orchestrator | 2025-05-17 01:09:11 | INFO  | Task 
583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:09:11.440580 | orchestrator | 2025-05-17 01:09:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:14.488766 | orchestrator | 2025-05-17 01:09:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:14.489707 | orchestrator | 2025-05-17 01:09:14 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:14.491288 | orchestrator | 2025-05-17 01:09:14 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:14.492763 | orchestrator | 2025-05-17 01:09:14 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:09:14.492836 | orchestrator | 2025-05-17 01:09:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:17.550685 | orchestrator | 2025-05-17 01:09:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:17.551958 | orchestrator | 2025-05-17 01:09:17 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:17.554279 | orchestrator | 2025-05-17 01:09:17 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:17.556007 | orchestrator | 2025-05-17 01:09:17 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:09:17.556039 | orchestrator | 2025-05-17 01:09:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:20.608524 | orchestrator | 2025-05-17 01:09:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:20.609963 | orchestrator | 2025-05-17 01:09:20 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:20.611406 | orchestrator | 2025-05-17 01:09:20 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:20.612311 | orchestrator | 2025-05-17 01:09:20 | INFO  | Task 
583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:09:20.612334 | orchestrator | 2025-05-17 01:09:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:23.662263 | orchestrator | 2025-05-17 01:09:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:23.663454 | orchestrator | 2025-05-17 01:09:23 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:23.664872 | orchestrator | 2025-05-17 01:09:23 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:23.666206 | orchestrator | 2025-05-17 01:09:23 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state STARTED 2025-05-17 01:09:23.666241 | orchestrator | 2025-05-17 01:09:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:26.714284 | orchestrator | 2025-05-17 01:09:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:26.714448 | orchestrator | 2025-05-17 01:09:26 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:26.714469 | orchestrator | 2025-05-17 01:09:26 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:26.715819 | orchestrator | 2025-05-17 01:09:26 | INFO  | Task 583a390d-3801-4acb-b27c-8b6d71c05170 is in state SUCCESS 2025-05-17 01:09:26.715882 | orchestrator | 2025-05-17 01:09:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:29.765703 | orchestrator | 2025-05-17 01:09:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:29.766720 | orchestrator | 2025-05-17 01:09:29 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:29.767960 | orchestrator | 2025-05-17 01:09:29 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:29.768068 | orchestrator | 2025-05-17 01:09:29 | INFO  | Wait 1 second(s) until the next 
check 2025-05-17 01:09:32.817939 | orchestrator | 2025-05-17 01:09:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:32.819012 | orchestrator | 2025-05-17 01:09:32 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:32.820657 | orchestrator | 2025-05-17 01:09:32 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:32.821275 | orchestrator | 2025-05-17 01:09:32 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:35.882427 | orchestrator | 2025-05-17 01:09:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:35.882917 | orchestrator | 2025-05-17 01:09:35 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:35.884444 | orchestrator | 2025-05-17 01:09:35 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state STARTED 2025-05-17 01:09:35.884487 | orchestrator | 2025-05-17 01:09:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:38.928658 | orchestrator | 2025-05-17 01:09:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:38.931338 | orchestrator | 2025-05-17 01:09:38.931378 | orchestrator | 2025-05-17 01:09:38.931387 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:09:38.931394 | orchestrator | 2025-05-17 01:09:38.931401 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:09:38.931447 | orchestrator | Saturday 17 May 2025 01:07:38 +0000 (0:00:00.336) 0:00:00.336 ********** 2025-05-17 01:09:38.931455 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:09:38.931464 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:09:38.931470 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:09:38.931477 | orchestrator | 2025-05-17 01:09:38.931483 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2025-05-17 01:09:38.931490 | orchestrator | Saturday 17 May 2025 01:07:39 +0000 (0:00:00.444) 0:00:00.780 ********** 2025-05-17 01:09:38.931496 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-17 01:09:38.931568 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-17 01:09:38.931604 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-17 01:09:38.931615 | orchestrator | 2025-05-17 01:09:38.931626 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-17 01:09:38.931636 | orchestrator | 2025-05-17 01:09:38.931646 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-17 01:09:38.931655 | orchestrator | Saturday 17 May 2025 01:07:39 +0000 (0:00:00.415) 0:00:01.195 ********** 2025-05-17 01:09:38.931665 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:09:38.931702 | orchestrator | 2025-05-17 01:09:38.931712 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-17 01:09:38.931722 | orchestrator | Saturday 17 May 2025 01:07:40 +0000 (0:00:01.317) 0:00:02.512 ********** 2025-05-17 01:09:38.931732 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-17 01:09:38.931743 | orchestrator | 2025-05-17 01:09:38.931753 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-17 01:09:38.931763 | orchestrator | Saturday 17 May 2025 01:07:44 +0000 (0:00:03.328) 0:00:05.841 ********** 2025-05-17 01:09:38.931856 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-17 01:09:38.931867 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> 
public) 2025-05-17 01:09:38.931874 | orchestrator | 2025-05-17 01:09:38.931880 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-17 01:09:38.931886 | orchestrator | Saturday 17 May 2025 01:07:51 +0000 (0:00:06.847) 0:00:12.689 ********** 2025-05-17 01:09:38.931892 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-17 01:09:38.931899 | orchestrator | 2025-05-17 01:09:38.931905 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-17 01:09:38.931911 | orchestrator | Saturday 17 May 2025 01:07:54 +0000 (0:00:03.333) 0:00:16.022 ********** 2025-05-17 01:09:38.931917 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-17 01:09:38.931923 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-17 01:09:38.931930 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-17 01:09:38.931937 | orchestrator | 2025-05-17 01:09:38.931945 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-17 01:09:38.931954 | orchestrator | Saturday 17 May 2025 01:08:02 +0000 (0:00:07.792) 0:00:23.815 ********** 2025-05-17 01:09:38.931965 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-17 01:09:38.931976 | orchestrator | 2025-05-17 01:09:38.931987 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-17 01:09:38.931997 | orchestrator | Saturday 17 May 2025 01:08:05 +0000 (0:00:03.109) 0:00:26.924 ********** 2025-05-17 01:09:38.932007 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-17 01:09:38.932017 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-17 01:09:38.932027 | orchestrator | 2025-05-17 01:09:38.932036 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 
2025-05-17 01:09:38.932046 | orchestrator | Saturday 17 May 2025 01:08:12 +0000 (0:00:07.462) 0:00:34.387 ********** 2025-05-17 01:09:38.932057 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-17 01:09:38.932067 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-17 01:09:38.932178 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-17 01:09:38.932191 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-17 01:09:38.932203 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-17 01:09:38.932215 | orchestrator | 2025-05-17 01:09:38.932225 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-17 01:09:38.932237 | orchestrator | Saturday 17 May 2025 01:08:27 +0000 (0:00:14.912) 0:00:49.300 ********** 2025-05-17 01:09:38.932246 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:09:38.932254 | orchestrator | 2025-05-17 01:09:38.932262 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-17 01:09:38.932269 | orchestrator | Saturday 17 May 2025 01:08:28 +0000 (0:00:00.744) 0:00:50.045 ********** 2025-05-17 01:09:38.932292 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-05-17 01:09:38.932314 | orchestrator | 2025-05-17 01:09:38.932321 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 01:09:38.932328 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-17 01:09:38.932336 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:09:38.932351 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:09:38.932358 | orchestrator | 2025-05-17 01:09:38.932364 | orchestrator | 2025-05-17 01:09:38.932371 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:09:38.932377 | orchestrator | Saturday 17 May 2025 01:08:31 +0000 (0:00:03.199) 0:00:53.244 ********** 2025-05-17 01:09:38.932383 | orchestrator | =============================================================================== 2025-05-17 01:09:38.932389 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.91s 2025-05-17 01:09:38.932396 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.79s 2025-05-17 01:09:38.932402 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.46s 2025-05-17 01:09:38.932408 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.85s 2025-05-17 01:09:38.932414 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.33s 2025-05-17 01:09:38.932420 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 
3.33s 2025-05-17 01:09:38.932426 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.20s 2025-05-17 01:09:38.932445 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.11s 2025-05-17 01:09:38.932451 | orchestrator | octavia : include_tasks ------------------------------------------------- 1.32s 2025-05-17 01:09:38.932457 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.74s 2025-05-17 01:09:38.932464 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2025-05-17 01:09:38.932477 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-05-17 01:09:38.932484 | orchestrator | 2025-05-17 01:09:38.932490 | orchestrator | 2025-05-17 01:09:38.932496 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:09:38.932502 | orchestrator | 2025-05-17 01:09:38.932509 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:09:38.932515 | orchestrator | Saturday 17 May 2025 01:06:11 +0000 (0:00:01.037) 0:00:01.037 ********** 2025-05-17 01:09:38.932521 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:09:38.932528 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:09:38.932535 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:09:38.932541 | orchestrator | 2025-05-17 01:09:38.932547 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 01:09:38.932553 | orchestrator | Saturday 17 May 2025 01:06:12 +0000 (0:00:00.836) 0:00:01.873 ********** 2025-05-17 01:09:38.932560 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-17 01:09:38.932566 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-17 01:09:38.932599 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-05-17 01:09:38.932606 | orchestrator | 2025-05-17 01:09:38.932612 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-17 01:09:38.932618 | orchestrator | 2025-05-17 01:09:38.932625 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-17 01:09:38.932637 | orchestrator | Saturday 17 May 2025 01:06:12 +0000 (0:00:00.945) 0:00:02.819 ********** 2025-05-17 01:09:38.932643 | orchestrator | 2025-05-17 01:09:38.932649 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-05-17 01:09:38.932655 | orchestrator | 2025-05-17 01:09:38.932662 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-05-17 01:09:38.932668 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:09:38.932674 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:09:38.932681 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:09:38.932687 | orchestrator | 2025-05-17 01:09:38.932693 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 01:09:38.932699 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:09:38.932706 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:09:38.932712 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-17 01:09:38.932718 | orchestrator | 2025-05-17 01:09:38.932724 | orchestrator | 2025-05-17 01:09:38.932730 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:09:38.932736 | orchestrator | Saturday 17 May 2025 01:09:24 +0000 (0:03:11.368) 0:03:14.188 ********** 2025-05-17 01:09:38.932743 | orchestrator | 
=============================================================================== 2025-05-17 01:09:38.932749 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 191.37s 2025-05-17 01:09:38.932755 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2025-05-17 01:09:38.932761 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2025-05-17 01:09:38.932767 | orchestrator | 2025-05-17 01:09:38.932773 | orchestrator | 2025-05-17 01:09:38.932785 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:09:38.932792 | orchestrator | 2025-05-17 01:09:38.932798 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:09:38.932804 | orchestrator | Saturday 17 May 2025 01:07:53 +0000 (0:00:00.307) 0:00:00.307 ********** 2025-05-17 01:09:38.932810 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:09:38.932816 | orchestrator | ok: [testbed-node-1] 2025-05-17 01:09:38.932823 | orchestrator | ok: [testbed-node-2] 2025-05-17 01:09:38.932829 | orchestrator | 2025-05-17 01:09:38.932835 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 01:09:38.932841 | orchestrator | Saturday 17 May 2025 01:07:53 +0000 (0:00:00.343) 0:00:00.651 ********** 2025-05-17 01:09:38.932847 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-17 01:09:38.932854 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-17 01:09:38.932863 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-17 01:09:38.932870 | orchestrator | 2025-05-17 01:09:38.932876 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-17 01:09:38.932882 | orchestrator | 2025-05-17 01:09:38.932888 | orchestrator | TASK [grafana : 
include_tasks] ************************************************* 2025-05-17 01:09:38.932894 | orchestrator | Saturday 17 May 2025 01:07:53 +0000 (0:00:00.299) 0:00:00.951 ********** 2025-05-17 01:09:38.932901 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:09:38.932907 | orchestrator | 2025-05-17 01:09:38.932913 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-17 01:09:38.932919 | orchestrator | Saturday 17 May 2025 01:07:54 +0000 (0:00:00.676) 0:00:01.627 ********** 2025-05-17 01:09:38.932927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-17 01:09:38.932939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.932946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.932953 | orchestrator |
2025-05-17 01:09:38.932959 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-05-17 01:09:38.932965 | orchestrator | Saturday 17 May 2025 01:07:55 +0000 (0:00:00.900) 0:00:02.527 **********
2025-05-17 01:09:38.932971 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-05-17 01:09:38.932977 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-05-17 01:09:38.932984 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-17 01:09:38.932990 | orchestrator |
2025-05-17 01:09:38.932996 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-17 01:09:38.933002 | orchestrator | Saturday 17 May 2025 01:07:55 +0000 (0:00:00.479) 0:00:03.007 **********
2025-05-17 01:09:38.933008 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-17 01:09:38.933015 | orchestrator |
2025-05-17 01:09:38.933021 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-05-17 01:09:38.933027 | orchestrator | Saturday 17 May 2025 01:07:56 +0000 (0:00:00.593) 0:00:03.600 **********
2025-05-17 01:09:38.933041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933066 | orchestrator |
2025-05-17 01:09:38.933072 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-05-17 01:09:38.933078 | orchestrator | Saturday 17 May 2025 01:07:57 +0000 (0:00:01.335) 0:00:04.936 **********
2025-05-17 01:09:38.933084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933091 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:09:38.933098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933104 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:09:38.933115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933122 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:09:38.933128 | orchestrator |
2025-05-17 01:09:38.933134 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-05-17 01:09:38.933140 | orchestrator | Saturday 17 May 2025 01:07:58 +0000 (0:00:00.657) 0:00:05.594 **********
2025-05-17 01:09:38.933150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933162 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:09:38.933168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933174 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:09:38.933181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933187 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:09:38.933193 | orchestrator |
2025-05-17 01:09:38.933199 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-05-17 01:09:38.933206 | orchestrator | Saturday 17 May 2025 01:07:59 +0000 (0:00:00.631) 0:00:06.225 **********
2025-05-17 01:09:38.933212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933243 | orchestrator |
2025-05-17 01:09:38.933250 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-05-17 01:09:38.933256 | orchestrator | Saturday 17 May 2025 01:08:00 +0000 (0:00:01.310) 0:00:07.536 **********
2025-05-17 01:09:38.933262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-17 01:09:38.933282 | orchestrator |
2025-05-17 01:09:38.933288 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-05-17 01:09:38.933294 | orchestrator | Saturday 17 May 2025 01:08:01 +0000 (0:00:01.445) 0:00:08.982 **********
2025-05-17 01:09:38.933301 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:09:38.933307 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:09:38.933313 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:09:38.933319 | orchestrator |
2025-05-17 01:09:38.933325 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-05-17 01:09:38.933332 | orchestrator | Saturday 17 May 2025 01:08:02 +0000 (0:00:00.315) 0:00:09.298 **********
2025-05-17 01:09:38.933338 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-17 01:09:38.933344 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-17 01:09:38.933350 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-17 01:09:38.933356 | orchestrator |
2025-05-17 01:09:38.933362 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-05-17 01:09:38.933375 | orchestrator | Saturday 17 May 2025 01:08:03 +0000 (0:00:01.432) 0:00:10.730 **********
2025-05-17 01:09:38.933381 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-17 01:09:38.933387 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-17 01:09:38.933397 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-17 01:09:38.933404 | orchestrator |
2025-05-17 01:09:38.933410 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-05-17 01:09:38.933416 | orchestrator | Saturday 17 May 2025 01:08:04 +0000 (0:00:01.400) 0:00:12.130 **********
2025-05-17 01:09:38.933422 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-17 01:09:38.933428 | orchestrator |
2025-05-17 01:09:38.933435 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-05-17 01:09:38.933441 | orchestrator | Saturday 17 May 2025 01:08:05 +0000 (0:00:00.423) 0:00:12.554 **********
2025-05-17 01:09:38.933447 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-05-17 01:09:38.933460 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-05-17 01:09:38.933467 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:09:38.933473 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:09:38.933479 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:09:38.933485 | orchestrator |
2025-05-17 01:09:38.933492 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-05-17 01:09:38.933498 | orchestrator | Saturday 17 May 2025 01:08:06 +0000 (0:00:00.805) 0:00:13.360 **********
2025-05-17 01:09:38.933504 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:09:38.933510 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:09:38.933516 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:09:38.933523 | orchestrator |
2025-05-17 01:09:38.933529 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-05-17 01:09:38.933535 | orchestrator | Saturday 17 May 2025 01:08:06 +0000 (0:00:00.408) 0:00:13.768 **********
2025-05-17 01:09:38.933542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090341, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1086898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.933549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090341, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1086898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.933556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090341, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1086898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.933567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090322, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0936897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.933596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090322, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0936897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.933607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090322, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0936897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.933613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090306, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0916896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.933620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090306, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0916896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.933626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090306, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0916896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.933636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090336, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0946896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090336, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0946896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090336, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0946896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090282, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0876896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090282, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0876896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090282, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0876896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090308, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0916896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090308, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0916896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090308, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0916896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090334, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0946896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090334, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0946896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090334, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0946896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090277, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0866897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090277, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0866897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090277, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0866897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090130, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0546892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090130, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0546892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090130, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0546892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090285, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0876896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090285, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0876896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090285, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0876896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090144, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0586894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090144, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0586894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090144, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0586894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1090328, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.0946896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-17 01:09:38.934268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1090328, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.0946896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False,
'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1090328, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.0946896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1090289, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.0906897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1090289, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.0906897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1090289, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.0906897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090340, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0956898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090340, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0956898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090340, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0956898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090275, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0866897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090275, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0866897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090275, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0866897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090309, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0936897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090309, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0936897, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090309, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0936897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090132, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0576892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090132, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1747440969.0576892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090132, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0576892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090146, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0606892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090146, 'dev': 152, 'nlink': 1, 
'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0606892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090146, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0606892, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090303, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0916896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 
'inode': 1090303, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0916896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090303, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.0916896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090482, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1776905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090482, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1776905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090482, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1776905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090479, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.12169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090479, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.12169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090479, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.12169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090541, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1876907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934571 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090541, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1876907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090541, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1876907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090399, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.10969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090399, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.10969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090399, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.10969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090546, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1916907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090546, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1916907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090546, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1916907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090529, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1747440969.1796906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090529, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1796906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090529, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1796906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090535, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1816907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090535, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1816907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090535, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1816907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090403, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.10969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090403, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.10969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090403, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.10969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090481, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.12269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090481, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.12269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090481, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.12269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090548, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1936908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090548, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1936908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090548, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1936908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934870 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1090539, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.1836905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1090539, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.1836905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1090539, 'dev': 152, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747440969.1836905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 
01:09:38.934897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090416, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.11269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090416, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.11269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090416, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.11269, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090411, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1106899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090411, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1106899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090411, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1106899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090432, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1146898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090432, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1146898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090432, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.1146898, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090437, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.12169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.934999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090437, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.12169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.935006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090437, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1747440969.12169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.935012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090550, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2096908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.935019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090550, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747440969.2096908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.935025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090550, 'dev': 152, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1747440969.2096908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-17 01:09:38.935032 | orchestrator | 2025-05-17 01:09:38.935039 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-17 01:09:38.935045 | orchestrator | Saturday 17 May 2025 01:08:38 +0000 (0:00:32.071) 0:00:45.840 ********** 2025-05-17 01:09:38.935060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-17 01:09:38.935071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-17 
01:09:38.935078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-17 01:09:38.935085 | orchestrator | 2025-05-17 01:09:38.935091 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-17 01:09:38.935098 | orchestrator | Saturday 17 May 2025 01:08:39 +0000 (0:00:00.970) 0:00:46.810 ********** 2025-05-17 01:09:38.935104 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:09:38.935111 | orchestrator | 2025-05-17 01:09:38.935117 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-17 01:09:38.935123 | orchestrator | Saturday 17 May 2025 01:08:42 +0000 (0:00:02.534) 0:00:49.345 ********** 2025-05-17 01:09:38.935129 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:09:38.935136 | orchestrator | 2025-05-17 01:09:38.935142 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-17 01:09:38.935148 | orchestrator | Saturday 17 May 2025 01:08:44 +0000 (0:00:02.181) 0:00:51.526 ********** 2025-05-17 01:09:38.935154 | orchestrator | 2025-05-17 01:09:38.935160 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-17 01:09:38.935166 | orchestrator | Saturday 17 May 2025 01:08:44 +0000 (0:00:00.052) 0:00:51.579 ********** 
2025-05-17 01:09:38.935172 | orchestrator | 2025-05-17 01:09:38.935178 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-17 01:09:38.935184 | orchestrator | Saturday 17 May 2025 01:08:44 +0000 (0:00:00.048) 0:00:51.628 ********** 2025-05-17 01:09:38.935190 | orchestrator | 2025-05-17 01:09:38.935196 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-17 01:09:38.935202 | orchestrator | Saturday 17 May 2025 01:08:44 +0000 (0:00:00.156) 0:00:51.785 ********** 2025-05-17 01:09:38.935208 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:09:38.935215 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:09:38.935221 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:09:38.935227 | orchestrator | 2025-05-17 01:09:38.935233 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-17 01:09:38.935244 | orchestrator | Saturday 17 May 2025 01:08:51 +0000 (0:00:06.769) 0:00:58.554 ********** 2025-05-17 01:09:38.935250 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:09:38.935256 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:09:38.935262 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-17 01:09:38.935269 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2025-05-17 01:09:38.935275 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:09:38.935281 | orchestrator | 2025-05-17 01:09:38.935287 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-17 01:09:38.935294 | orchestrator | Saturday 17 May 2025 01:09:17 +0000 (0:00:26.593) 0:01:25.147 ********** 2025-05-17 01:09:38.935300 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:09:38.935306 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:09:38.935312 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:09:38.935318 | orchestrator | 2025-05-17 01:09:38.935324 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-17 01:09:38.935331 | orchestrator | Saturday 17 May 2025 01:09:32 +0000 (0:00:15.024) 0:01:40.172 ********** 2025-05-17 01:09:38.935337 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:09:38.935343 | orchestrator | 2025-05-17 01:09:38.935385 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-17 01:09:38.935393 | orchestrator | Saturday 17 May 2025 01:09:35 +0000 (0:00:02.226) 0:01:42.398 ********** 2025-05-17 01:09:38.935399 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:09:38.935405 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:09:38.935411 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:09:38.935418 | orchestrator | 2025-05-17 01:09:38.935424 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-17 01:09:38.935430 | orchestrator | Saturday 17 May 2025 01:09:35 +0000 (0:00:00.394) 0:01:42.793 ********** 2025-05-17 01:09:38.935440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-05-17 01:09:38.935448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-17 01:09:38.935455 | orchestrator | 2025-05-17 01:09:38.935461 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-17 01:09:38.935467 | orchestrator | Saturday 17 May 2025 01:09:37 +0000 (0:00:02.330) 0:01:45.124 ********** 2025-05-17 01:09:38.935474 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:09:38.935480 | orchestrator | 2025-05-17 01:09:38.935486 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-17 01:09:38.935492 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-17 01:09:38.935500 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-17 01:09:38.935506 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-17 01:09:38.935512 | orchestrator | 2025-05-17 01:09:38.935518 | orchestrator | 2025-05-17 01:09:38.935524 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-17 01:09:38.935530 | orchestrator | Saturday 17 May 2025 01:09:38 +0000 (0:00:00.442) 0:01:45.566 ********** 2025-05-17 01:09:38.935541 | orchestrator | =============================================================================== 2025-05-17 01:09:38.935547 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 32.07s 2025-05-17 01:09:38.935553 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 26.59s 2025-05-17 01:09:38.935559 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 15.02s 2025-05-17 01:09:38.935565 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.77s 2025-05-17 01:09:38.935606 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.53s 2025-05-17 01:09:38.935613 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.33s 2025-05-17 01:09:38.935620 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.23s 2025-05-17 01:09:38.935626 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.18s 2025-05-17 01:09:38.935632 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s 2025-05-17 01:09:38.935638 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.43s 2025-05-17 01:09:38.935644 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.40s 2025-05-17 01:09:38.935650 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s 2025-05-17 01:09:38.935657 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.31s 2025-05-17 01:09:38.935663 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.97s 2025-05-17 01:09:38.935669 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.90s 2025-05-17 01:09:38.935675 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.81s 2025-05-17 01:09:38.935681 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s 2025-05-17 01:09:38.935687 | orchestrator | service-cert-copy : grafana | Copying over 
backend internal TLS certificate --- 0.66s 2025-05-17 01:09:38.935693 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.63s 2025-05-17 01:09:38.935700 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.59s 2025-05-17 01:09:38.935706 | orchestrator | 2025-05-17 01:09:38 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:38.935712 | orchestrator | 2025-05-17 01:09:38 | INFO  | Task 6e328ad9-e683-4c2f-bbed-070148a3884d is in state SUCCESS 2025-05-17 01:09:38.935718 | orchestrator | 2025-05-17 01:09:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:41.979008 | orchestrator | 2025-05-17 01:09:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:41.980500 | orchestrator | 2025-05-17 01:09:41 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:41.980539 | orchestrator | 2025-05-17 01:09:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:45.031194 | orchestrator | 2025-05-17 01:09:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:45.031864 | orchestrator | 2025-05-17 01:09:45 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:45.031910 | orchestrator | 2025-05-17 01:09:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:48.088003 | orchestrator | 2025-05-17 01:09:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:48.088654 | orchestrator | 2025-05-17 01:09:48 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:48.088844 | orchestrator | 2025-05-17 01:09:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:51.142102 | orchestrator | 2025-05-17 01:09:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:51.142274 | 
orchestrator | 2025-05-17 01:09:51 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:51.142313 | orchestrator | 2025-05-17 01:09:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:54.191000 | orchestrator | 2025-05-17 01:09:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:54.193063 | orchestrator | 2025-05-17 01:09:54 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:54.193108 | orchestrator | 2025-05-17 01:09:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:09:57.246173 | orchestrator | 2025-05-17 01:09:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:09:57.247210 | orchestrator | 2025-05-17 01:09:57 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:09:57.247259 | orchestrator | 2025-05-17 01:09:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:00.285463 | orchestrator | 2025-05-17 01:10:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:00.287849 | orchestrator | 2025-05-17 01:10:00 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:00.288275 | orchestrator | 2025-05-17 01:10:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:03.336672 | orchestrator | 2025-05-17 01:10:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:03.338851 | orchestrator | 2025-05-17 01:10:03 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:03.338931 | orchestrator | 2025-05-17 01:10:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:06.398188 | orchestrator | 2025-05-17 01:10:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:06.400228 | orchestrator | 2025-05-17 01:10:06 | INFO  | Task 
84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:06.400274 | orchestrator | 2025-05-17 01:10:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:09.440352 | orchestrator | 2025-05-17 01:10:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:09.443162 | orchestrator | 2025-05-17 01:10:09 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:09.443510 | orchestrator | 2025-05-17 01:10:09 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:12.497422 | orchestrator | 2025-05-17 01:10:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:12.498871 | orchestrator | 2025-05-17 01:10:12 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:12.498919 | orchestrator | 2025-05-17 01:10:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:15.537262 | orchestrator | 2025-05-17 01:10:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:15.537397 | orchestrator | 2025-05-17 01:10:15 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:15.537409 | orchestrator | 2025-05-17 01:10:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:18.576783 | orchestrator | 2025-05-17 01:10:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:18.577102 | orchestrator | 2025-05-17 01:10:18 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:18.577142 | orchestrator | 2025-05-17 01:10:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:21.621837 | orchestrator | 2025-05-17 01:10:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:21.622929 | orchestrator | 2025-05-17 01:10:21 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 
01:10:21.622966 | orchestrator | 2025-05-17 01:10:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:24.665619 | orchestrator | 2025-05-17 01:10:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:24.669622 | orchestrator | 2025-05-17 01:10:24 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:24.669705 | orchestrator | 2025-05-17 01:10:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:27.733701 | orchestrator | 2025-05-17 01:10:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:27.736237 | orchestrator | 2025-05-17 01:10:27 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:27.736290 | orchestrator | 2025-05-17 01:10:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:30.787324 | orchestrator | 2025-05-17 01:10:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:30.787432 | orchestrator | 2025-05-17 01:10:30 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:30.789231 | orchestrator | 2025-05-17 01:10:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:33.832867 | orchestrator | 2025-05-17 01:10:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:33.833816 | orchestrator | 2025-05-17 01:10:33 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:33.833852 | orchestrator | 2025-05-17 01:10:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:36.883677 | orchestrator | 2025-05-17 01:10:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:36.883786 | orchestrator | 2025-05-17 01:10:36 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:36.883870 | orchestrator | 2025-05-17 01:10:36 | INFO  | Wait 1 second(s) 
until the next check 2025-05-17 01:10:39.922844 | orchestrator | 2025-05-17 01:10:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:39.924080 | orchestrator | 2025-05-17 01:10:39 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:39.924123 | orchestrator | 2025-05-17 01:10:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:42.981624 | orchestrator | 2025-05-17 01:10:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:42.981745 | orchestrator | 2025-05-17 01:10:42 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:42.981760 | orchestrator | 2025-05-17 01:10:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:46.036029 | orchestrator | 2025-05-17 01:10:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:46.036692 | orchestrator | 2025-05-17 01:10:46 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:46.036727 | orchestrator | 2025-05-17 01:10:46 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:49.083600 | orchestrator | 2025-05-17 01:10:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:49.085850 | orchestrator | 2025-05-17 01:10:49 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:49.085926 | orchestrator | 2025-05-17 01:10:49 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:52.129301 | orchestrator | 2025-05-17 01:10:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:52.130583 | orchestrator | 2025-05-17 01:10:52 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:52.130702 | orchestrator | 2025-05-17 01:10:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:55.181207 | orchestrator | 2025-05-17 
01:10:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:55.182374 | orchestrator | 2025-05-17 01:10:55 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:55.182441 | orchestrator | 2025-05-17 01:10:55 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:10:58.233119 | orchestrator | 2025-05-17 01:10:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:10:58.234383 | orchestrator | 2025-05-17 01:10:58 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:10:58.234485 | orchestrator | 2025-05-17 01:10:58 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:01.293603 | orchestrator | 2025-05-17 01:11:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:01.293850 | orchestrator | 2025-05-17 01:11:01 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:01.294943 | orchestrator | 2025-05-17 01:11:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:04.335676 | orchestrator | 2025-05-17 01:11:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:04.337178 | orchestrator | 2025-05-17 01:11:04 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:04.337209 | orchestrator | 2025-05-17 01:11:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:07.390985 | orchestrator | 2025-05-17 01:11:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:07.391093 | orchestrator | 2025-05-17 01:11:07 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:07.391110 | orchestrator | 2025-05-17 01:11:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:10.435633 | orchestrator | 2025-05-17 01:11:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state 
STARTED 2025-05-17 01:11:10.436409 | orchestrator | 2025-05-17 01:11:10 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:10.436438 | orchestrator | 2025-05-17 01:11:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:13.490966 | orchestrator | 2025-05-17 01:11:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:13.492601 | orchestrator | 2025-05-17 01:11:13 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:13.492635 | orchestrator | 2025-05-17 01:11:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:16.548964 | orchestrator | 2025-05-17 01:11:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:16.550142 | orchestrator | 2025-05-17 01:11:16 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:16.550189 | orchestrator | 2025-05-17 01:11:16 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:19.591460 | orchestrator | 2025-05-17 01:11:19 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:19.592987 | orchestrator | 2025-05-17 01:11:19 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:19.593057 | orchestrator | 2025-05-17 01:11:19 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:22.652923 | orchestrator | 2025-05-17 01:11:22 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:22.653038 | orchestrator | 2025-05-17 01:11:22 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:22.653052 | orchestrator | 2025-05-17 01:11:22 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:25.723602 | orchestrator | 2025-05-17 01:11:25 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:25.724952 | orchestrator | 2025-05-17 01:11:25 | INFO  
| Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:25.724994 | orchestrator | 2025-05-17 01:11:25 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:28.778334 | orchestrator | 2025-05-17 01:11:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:28.780543 | orchestrator | 2025-05-17 01:11:28 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:28.780588 | orchestrator | 2025-05-17 01:11:28 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:31.832607 | orchestrator | 2025-05-17 01:11:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:31.833737 | orchestrator | 2025-05-17 01:11:31 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:31.833907 | orchestrator | 2025-05-17 01:11:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:34.886679 | orchestrator | 2025-05-17 01:11:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:34.888745 | orchestrator | 2025-05-17 01:11:34 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:34.888772 | orchestrator | 2025-05-17 01:11:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:37.937703 | orchestrator | 2025-05-17 01:11:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:37.939583 | orchestrator | 2025-05-17 01:11:37 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:37.939686 | orchestrator | 2025-05-17 01:11:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:40.988018 | orchestrator | 2025-05-17 01:11:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:40.989665 | orchestrator | 2025-05-17 01:11:40 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 
01:11:40.989697 | orchestrator | 2025-05-17 01:11:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:44.039287 | orchestrator | 2025-05-17 01:11:44 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:44.040128 | orchestrator | 2025-05-17 01:11:44 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:44.040197 | orchestrator | 2025-05-17 01:11:44 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:47.097734 | orchestrator | 2025-05-17 01:11:47 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:47.099739 | orchestrator | 2025-05-17 01:11:47 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:47.099867 | orchestrator | 2025-05-17 01:11:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:50.155219 | orchestrator | 2025-05-17 01:11:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:50.158433 | orchestrator | 2025-05-17 01:11:50 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:50.158520 | orchestrator | 2025-05-17 01:11:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:53.206766 | orchestrator | 2025-05-17 01:11:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:53.208276 | orchestrator | 2025-05-17 01:11:53 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:53.208308 | orchestrator | 2025-05-17 01:11:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:11:56.255878 | orchestrator | 2025-05-17 01:11:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:56.257571 | orchestrator | 2025-05-17 01:11:56 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:56.257606 | orchestrator | 2025-05-17 01:11:56 | INFO  | Wait 1 second(s) 
until the next check 2025-05-17 01:11:59.301658 | orchestrator | 2025-05-17 01:11:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:11:59.302714 | orchestrator | 2025-05-17 01:11:59 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:11:59.302761 | orchestrator | 2025-05-17 01:11:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:02.352540 | orchestrator | 2025-05-17 01:12:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:02.353926 | orchestrator | 2025-05-17 01:12:02 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:02.353977 | orchestrator | 2025-05-17 01:12:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:05.401644 | orchestrator | 2025-05-17 01:12:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:05.403312 | orchestrator | 2025-05-17 01:12:05 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:05.403388 | orchestrator | 2025-05-17 01:12:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:08.454973 | orchestrator | 2025-05-17 01:12:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:08.455543 | orchestrator | 2025-05-17 01:12:08 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:08.455585 | orchestrator | 2025-05-17 01:12:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:11.502909 | orchestrator | 2025-05-17 01:12:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:11.504686 | orchestrator | 2025-05-17 01:12:11 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:11.504719 | orchestrator | 2025-05-17 01:12:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:14.548012 | orchestrator | 2025-05-17 
01:12:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:14.548601 | orchestrator | 2025-05-17 01:12:14 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:14.548656 | orchestrator | 2025-05-17 01:12:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:17.591407 | orchestrator | 2025-05-17 01:12:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:17.592873 | orchestrator | 2025-05-17 01:12:17 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:17.593394 | orchestrator | 2025-05-17 01:12:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:20.645711 | orchestrator | 2025-05-17 01:12:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:20.646654 | orchestrator | 2025-05-17 01:12:20 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:20.646685 | orchestrator | 2025-05-17 01:12:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:23.691647 | orchestrator | 2025-05-17 01:12:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:23.692245 | orchestrator | 2025-05-17 01:12:23 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:23.692297 | orchestrator | 2025-05-17 01:12:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:26.749750 | orchestrator | 2025-05-17 01:12:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:26.750700 | orchestrator | 2025-05-17 01:12:26 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:26.750755 | orchestrator | 2025-05-17 01:12:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:29.805047 | orchestrator | 2025-05-17 01:12:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state 
STARTED 2025-05-17 01:12:29.807247 | orchestrator | 2025-05-17 01:12:29 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:29.807291 | orchestrator | 2025-05-17 01:12:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:32.859400 | orchestrator | 2025-05-17 01:12:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:32.860822 | orchestrator | 2025-05-17 01:12:32 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:32.860857 | orchestrator | 2025-05-17 01:12:32 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:35.917535 | orchestrator | 2025-05-17 01:12:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:35.918458 | orchestrator | 2025-05-17 01:12:35 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:35.918519 | orchestrator | 2025-05-17 01:12:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:38.970846 | orchestrator | 2025-05-17 01:12:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:38.970956 | orchestrator | 2025-05-17 01:12:38 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:38.970972 | orchestrator | 2025-05-17 01:12:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:42.019776 | orchestrator | 2025-05-17 01:12:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:42.021292 | orchestrator | 2025-05-17 01:12:42 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:42.021607 | orchestrator | 2025-05-17 01:12:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:45.064960 | orchestrator | 2025-05-17 01:12:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:45.066331 | orchestrator | 2025-05-17 01:12:45 | INFO  
| Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:45.066925 | orchestrator | 2025-05-17 01:12:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:48.118889 | orchestrator | 2025-05-17 01:12:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:48.120318 | orchestrator | 2025-05-17 01:12:48 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:48.120350 | orchestrator | 2025-05-17 01:12:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:51.168381 | orchestrator | 2025-05-17 01:12:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:51.170471 | orchestrator | 2025-05-17 01:12:51 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:51.170513 | orchestrator | 2025-05-17 01:12:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:54.222117 | orchestrator | 2025-05-17 01:12:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:54.222842 | orchestrator | 2025-05-17 01:12:54 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:54.222870 | orchestrator | 2025-05-17 01:12:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:12:57.271055 | orchestrator | 2025-05-17 01:12:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:12:57.272773 | orchestrator | 2025-05-17 01:12:57 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:12:57.273777 | orchestrator | 2025-05-17 01:12:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:00.318260 | orchestrator | 2025-05-17 01:13:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:00.319046 | orchestrator | 2025-05-17 01:13:00 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 
01:13:00.319085 | orchestrator | 2025-05-17 01:13:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:03.358814 | orchestrator | 2025-05-17 01:13:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:03.358924 | orchestrator | 2025-05-17 01:13:03 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:03.358938 | orchestrator | 2025-05-17 01:13:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:06.388142 | orchestrator | 2025-05-17 01:13:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:06.388284 | orchestrator | 2025-05-17 01:13:06 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:06.388296 | orchestrator | 2025-05-17 01:13:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:09.445436 | orchestrator | 2025-05-17 01:13:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:09.447493 | orchestrator | 2025-05-17 01:13:09 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:09.447548 | orchestrator | 2025-05-17 01:13:09 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:12.489403 | orchestrator | 2025-05-17 01:13:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:12.490152 | orchestrator | 2025-05-17 01:13:12 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:12.490252 | orchestrator | 2025-05-17 01:13:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:15.543068 | orchestrator | 2025-05-17 01:13:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:15.543281 | orchestrator | 2025-05-17 01:13:15 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:15.543302 | orchestrator | 2025-05-17 01:13:15 | INFO  | Wait 1 second(s) 
until the next check 2025-05-17 01:13:18.588681 | orchestrator | 2025-05-17 01:13:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:18.590100 | orchestrator | 2025-05-17 01:13:18 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:18.590194 | orchestrator | 2025-05-17 01:13:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:21.636219 | orchestrator | 2025-05-17 01:13:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:21.637981 | orchestrator | 2025-05-17 01:13:21 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:21.638071 | orchestrator | 2025-05-17 01:13:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:24.695144 | orchestrator | 2025-05-17 01:13:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:24.695984 | orchestrator | 2025-05-17 01:13:24 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:24.696026 | orchestrator | 2025-05-17 01:13:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:27.741421 | orchestrator | 2025-05-17 01:13:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:27.742962 | orchestrator | 2025-05-17 01:13:27 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:27.743060 | orchestrator | 2025-05-17 01:13:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:30.789809 | orchestrator | 2025-05-17 01:13:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:30.791372 | orchestrator | 2025-05-17 01:13:30 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:30.791415 | orchestrator | 2025-05-17 01:13:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:33.846451 | orchestrator | 2025-05-17 
01:13:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:33.848813 | orchestrator | 2025-05-17 01:13:33 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:33.848849 | orchestrator | 2025-05-17 01:13:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:36.907886 | orchestrator | 2025-05-17 01:13:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:36.908157 | orchestrator | 2025-05-17 01:13:36 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:36.908182 | orchestrator | 2025-05-17 01:13:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:39.960621 | orchestrator | 2025-05-17 01:13:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:39.961192 | orchestrator | 2025-05-17 01:13:39 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state STARTED 2025-05-17 01:13:39.961225 | orchestrator | 2025-05-17 01:13:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:43.014468 | orchestrator | 2025-05-17 01:13:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:13:43.019877 | orchestrator | 2025-05-17 01:13:43 | INFO  | Task 84fed6b0-02c6-4ce9-ae8f-e31ff32fbb79 is in state SUCCESS 2025-05-17 01:13:43.022155 | orchestrator | 2025-05-17 01:13:43.022280 | orchestrator | 2025-05-17 01:13:43.022405 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-17 01:13:43.022448 | orchestrator | 2025-05-17 01:13:43.022465 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-17 01:13:43.022515 | orchestrator | Saturday 17 May 2025 01:05:34 +0000 (0:00:00.213) 0:00:00.213 ********** 2025-05-17 01:13:43.022609 | orchestrator | changed: [testbed-manager] 2025-05-17 01:13:43.022627 | orchestrator | changed: 
[testbed-node-0] 2025-05-17 01:13:43.022636 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:13:43.022648 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:13:43.022662 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:13:43.022677 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:13:43.022692 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:13:43.022706 | orchestrator | 2025-05-17 01:13:43.022720 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-17 01:13:43.022729 | orchestrator | Saturday 17 May 2025 01:05:35 +0000 (0:00:00.732) 0:00:00.946 ********** 2025-05-17 01:13:43.022738 | orchestrator | changed: [testbed-manager] 2025-05-17 01:13:43.022747 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:13:43.022756 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:13:43.022764 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:13:43.022773 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:13:43.022782 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:13:43.022790 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:13:43.022805 | orchestrator | 2025-05-17 01:13:43.022820 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-17 01:13:43.022835 | orchestrator | Saturday 17 May 2025 01:05:36 +0000 (0:00:00.743) 0:00:01.689 ********** 2025-05-17 01:13:43.022850 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-17 01:13:43.022867 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-17 01:13:43.022882 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-17 01:13:43.022897 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-17 01:13:43.022911 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-17 01:13:43.022927 | orchestrator | changed: [testbed-node-4] => 
(item=enable_nova_True) 2025-05-17 01:13:43.022937 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-17 01:13:43.022945 | orchestrator | 2025-05-17 01:13:43.022954 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-17 01:13:43.022963 | orchestrator | 2025-05-17 01:13:43.022974 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-17 01:13:43.022988 | orchestrator | Saturday 17 May 2025 01:05:37 +0000 (0:00:00.793) 0:00:02.482 ********** 2025-05-17 01:13:43.023004 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:13:43.023018 | orchestrator | 2025-05-17 01:13:43.023033 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-17 01:13:43.023042 | orchestrator | Saturday 17 May 2025 01:05:37 +0000 (0:00:00.634) 0:00:03.117 ********** 2025-05-17 01:13:43.023053 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-17 01:13:43.023068 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-17 01:13:43.023084 | orchestrator | 2025-05-17 01:13:43.023099 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-17 01:13:43.023112 | orchestrator | Saturday 17 May 2025 01:05:42 +0000 (0:00:04.264) 0:00:07.381 ********** 2025-05-17 01:13:43.023121 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-17 01:13:43.023130 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-17 01:13:43.023138 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:13:43.023147 | orchestrator | 2025-05-17 01:13:43.023186 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-17 01:13:43.023212 | orchestrator | Saturday 17 May 2025 01:05:46 +0000 (0:00:04.283) 0:00:11.665 ********** 2025-05-17 01:13:43.023222 | 
orchestrator | changed: [testbed-node-0] 2025-05-17 01:13:43.023230 | orchestrator | 2025-05-17 01:13:43.023239 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-17 01:13:43.023248 | orchestrator | Saturday 17 May 2025 01:05:46 +0000 (0:00:00.586) 0:00:12.251 ********** 2025-05-17 01:13:43.023256 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:13:43.023276 | orchestrator | 2025-05-17 01:13:43.023284 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-17 01:13:43.023293 | orchestrator | Saturday 17 May 2025 01:05:48 +0000 (0:00:01.518) 0:00:13.770 ********** 2025-05-17 01:13:43.023302 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:13:43.023310 | orchestrator | 2025-05-17 01:13:43.023318 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-17 01:13:43.023327 | orchestrator | Saturday 17 May 2025 01:05:51 +0000 (0:00:02.846) 0:00:16.616 ********** 2025-05-17 01:13:43.023335 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.023344 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.023353 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.023361 | orchestrator | 2025-05-17 01:13:43.023396 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-17 01:13:43.023406 | orchestrator | Saturday 17 May 2025 01:05:51 +0000 (0:00:00.427) 0:00:17.044 ********** 2025-05-17 01:13:43.023415 | orchestrator | ok: [testbed-node-0] 2025-05-17 01:13:43.023424 | orchestrator | 2025-05-17 01:13:43.023433 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-17 01:13:43.023441 | orchestrator | Saturday 17 May 2025 01:06:19 +0000 (0:00:27.667) 0:00:44.712 ********** 2025-05-17 01:13:43.023450 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:13:43.023458 | 
TASK [nova-cell : Get a list of existing cells] ********************************
Saturday 17 May 2025 01:06:31 +0000 (0:00:12.020) 0:00:56.732 **********
ok: [testbed-node-0]

TASK [nova-cell : Extract current cell settings from list] *********************
Saturday 17 May 2025 01:06:41 +0000 (0:00:10.015) 0:01:06.748 **********
ok: [testbed-node-0]

TASK [nova : Update cell0 mappings] ********************************************
Saturday 17 May 2025 01:06:43 +0000 (0:00:01.837) 0:01:08.585 **********
skipping: [testbed-node-0]

TASK [nova : include_tasks] ****************************************************
Saturday 17 May 2025 01:06:44 +0000 (0:00:01.460) 0:01:10.045 **********
included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [nova : Running Nova API bootstrap container] *****************************
Saturday 17 May 2025 01:06:45 +0000 (0:00:00.818) 0:01:10.863 **********
ok: [testbed-node-0]

TASK [Bootstrap upgrade] *******************************************************
Saturday 17 May 2025 01:07:00 +0000 (0:00:15.154) 0:01:26.018 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Bootstrap nova cell databases] *******************************************

TASK [Bootstrap deploy] ********************************************************
Saturday 17 May 2025 01:07:01 +0000 (0:00:00.289) 0:01:26.308 **********
included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2

TASK [nova-cell : Creating Nova cell database] *********************************
Saturday 17 May 2025 01:07:01 +0000 (0:00:00.831) 0:01:27.140 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
Saturday 17 May 2025 01:07:04 +0000 (0:00:02.118) 0:01:29.258 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
Saturday 17 May 2025 01:07:06 +0000 (0:00:02.145) 0:01:31.404 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
Saturday 17 May 2025 01:07:06 +0000 (0:00:00.681) 0:01:32.085 **********
skipping: [testbed-node-2] => (item=None)
skipping: [testbed-node-2]
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-1]
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]

TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
Saturday 17 May 2025 01:07:15 +0000 (0:00:08.588) 0:01:40.673 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
Saturday 17 May 2025 01:07:15 +0000 (0:00:00.450) 0:01:41.124 **********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=None)
skipping: [testbed-node-2]

TASK [nova-cell : Ensuring config directories exist] ***************************
Saturday 17 May 2025 01:07:16 +0000 (0:00:00.814) 0:01:41.938 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
Saturday 17 May 2025 01:07:17 +0000 (0:00:00.434) 0:01:42.373 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
Saturday 17 May 2025 01:07:18 +0000 (0:00:01.046) 0:01:43.420 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]
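The service-rabbitmq tasks above show two things worth noting: only one host actually talks to the broker (the `testbed-node-0 -> {{ service_rabbitmq_delegate_host }}` delegation), and the tasks are "ensure" checks that report `ok` when the vhost or user already exists. A minimal sketch of that check-then-create pattern, with an in-memory `Broker` class standing in (assumption) for the real rabbitmqctl/management-API calls:

```python
# Sketch of the idempotent "ensure" pattern behind the service-rabbitmq tasks.
# Broker is a stand-in for the real RabbitMQ broker; the return values mirror
# Ansible's ok/changed task states.
class Broker:
    def __init__(self):
        self.vhosts, self.users = set(), {}

def ensure_vhost(broker, name):
    if name in broker.vhosts:
        return "ok"            # already present -> task reports ok
    broker.vhosts.add(name)
    return "changed"           # created -> task reports changed

def ensure_user(broker, name, password, vhost):
    if broker.users.get(name) == (password, vhost):
        return "ok"
    broker.users[name] = (password, vhost)
    return "changed"

broker = Broker()
assert ensure_vhost(broker, "/") == "changed"
assert ensure_vhost(broker, "/") == "ok"   # second run is a no-op
```

Delegating to a single host keeps the broker mutation serialized, which is why the other two nodes log `skipping` for these tasks.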
TASK [nova-cell : Running Nova cell bootstrap container] ***********************
Saturday 17 May 2025 01:07:20 +0000 (0:00:02.236) 0:01:45.657 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-0]

TASK [nova-cell : Get a list of existing cells] ********************************
Saturday 17 May 2025 01:07:39 +0000 (0:00:19.043) 0:02:04.701 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-0]

TASK [nova-cell : Extract current cell settings from list] *********************
Saturday 17 May 2025 01:07:50 +0000 (0:00:10.936) 0:02:15.637 **********
ok: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [nova-cell : Create cell] *************************************************
Saturday 17 May 2025 01:07:51 +0000 (0:00:01.197) 0:02:16.834 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Update cell] *************************************************
Saturday 17 May 2025 01:08:01 +0000 (0:00:10.193) 0:02:27.027 **********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [Bootstrap upgrade] *******************************************************
Saturday 17 May 2025 01:08:03 +0000 (0:00:01.452) 0:02:28.480 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Apply role nova] *********************************************************

TASK [nova : include_tasks] ****************************************************
Saturday 17 May 2025 01:08:03 +0000 (0:00:00.463) 0:02:28.944 **********
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : nova | Creating services] **************************
Saturday 17 May 2025 01:08:04 +0000 (0:00:00.631) 0:02:29.576 **********
skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
changed: [testbed-node-0] => (item=nova (compute))
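The service-ks-register tasks register the Keystone service, endpoints, project, user, and role grants for nova, and like the RabbitMQ tasks they are idempotent: disabled variants (here `nova_legacy`) are skipped, and existing entries report `ok` rather than `changed`. A minimal sketch of the endpoint half of that logic, with a plain dict standing in (assumption) for the Keystone catalog; the v2.1 URLs are the ones this deployment registers:

```python
# Sketch of idempotent endpoint registration in the style of service-ks-register.
# `catalog` is a stand-in for the Keystone endpoint catalog (assumption).
catalog = {}  # (service, interface) -> url

def ensure_endpoint(catalog, service, interface, url):
    if catalog.get((service, interface)) == url:
        return "ok"            # endpoint already correct
    catalog[(service, interface)] = url
    return "changed"           # created or updated

for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:8774/v2.1"),
    ("public", "https://api.testbed.osism.xyz:8774/v2.1"),
]:
    ensure_endpoint(catalog, "nova", interface, url)
```

Re-running the play against an already-registered catalog would report `ok` for every endpoint, which is what makes repeated deploys safe.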
TASK [service-ks-register : nova | Creating endpoints] *************************
Saturday 17 May 2025 01:08:07 +0000 (0:00:03.300) 0:02:32.876 **********
skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)

TASK [service-ks-register : nova | Creating projects] **************************
Saturday 17 May 2025 01:08:14 +0000 (0:00:06.545) 0:02:39.421 **********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : nova | Creating users] *****************************
Saturday 17 May 2025 01:08:17 +0000 (0:00:03.231) 0:02:42.653 **********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=nova -> service)

TASK [service-ks-register : nova | Creating roles] *****************************
Saturday 17 May 2025 01:08:20 +0000
(0:00:03.273) 0:02:45.927 ********** 2025-05-17 01:13:43.025328 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-17 01:13:43.025337 | orchestrator | 2025-05-17 01:13:43.025346 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-17 01:13:43.025353 | orchestrator | Saturday 17 May 2025 01:08:24 +0000 (0:00:03.391) 0:02:49.318 ********** 2025-05-17 01:13:43.025361 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-17 01:13:43.025369 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-17 01:13:43.025376 | orchestrator | 2025-05-17 01:13:43.025384 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-17 01:13:43.025400 | orchestrator | Saturday 17 May 2025 01:08:32 +0000 (0:00:08.048) 0:02:57.366 ********** 2025-05-17 01:13:43.025414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.025476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.025525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.025545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.025554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.025564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.025578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.025590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.025599 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.025607 | orchestrator | 2025-05-17 01:13:43.025615 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-17 01:13:43.025627 | orchestrator | Saturday 17 May 2025 01:08:33 +0000 (0:00:01.399) 0:02:58.766 ********** 2025-05-17 01:13:43.025641 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.025655 | orchestrator | 2025-05-17 01:13:43.025665 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-17 01:13:43.025673 | orchestrator | Saturday 17 May 2025 01:08:33 +0000 (0:00:00.247) 0:02:59.013 ********** 2025-05-17 01:13:43.025681 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.025688 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.025696 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.025704 | orchestrator | 2025-05-17 01:13:43.025712 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-17 01:13:43.025719 | orchestrator | Saturday 17 May 2025 01:08:34 +0000 (0:00:00.272) 0:02:59.285 ********** 2025-05-17 01:13:43.025727 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-17 01:13:43.025735 | orchestrator | 2025-05-17 01:13:43.025748 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 
2025-05-17 01:13:43.025757 | orchestrator | Saturday 17 May 2025 01:08:34 +0000 (0:00:00.509) 0:02:59.795 ********** 2025-05-17 01:13:43.025765 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.025773 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.025781 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.025789 | orchestrator | 2025-05-17 01:13:43.025797 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-17 01:13:43.025804 | orchestrator | Saturday 17 May 2025 01:08:34 +0000 (0:00:00.288) 0:03:00.083 ********** 2025-05-17 01:13:43.025812 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:13:43.025820 | orchestrator | 2025-05-17 01:13:43.025828 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-17 01:13:43.025846 | orchestrator | Saturday 17 May 2025 01:08:35 +0000 (0:00:00.786) 0:03:00.869 ********** 2025-05-17 01:13:43.025855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.025868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.025884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.025894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.025914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.025923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.025931 | orchestrator | 2025-05-17 01:13:43.025939 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-17 01:13:43.025947 | orchestrator | Saturday 17 May 2025 01:08:38 +0000 (0:00:02.473) 0:03:03.343 ********** 2025-05-17 01:13:43.025959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-17 01:13:43.025968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.025981 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.025990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-17 01:13:43.026004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026012 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.026097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-17 01:13:43.026112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026125 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.026138 | orchestrator | 2025-05-17 01:13:43.026152 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-17 01:13:43.026166 | orchestrator | Saturday 17 May 2025 01:08:38 +0000 (0:00:00.597) 0:03:03.940 ********** 2025-05-17 01:13:43.026518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-17 01:13:43.026612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026622 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.026642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-17 01:13:43.026651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026658 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.026678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-17 01:13:43.026691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026699 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.026706 | orchestrator | 2025-05-17 01:13:43.026714 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-17 01:13:43.026722 | orchestrator | Saturday 17 May 2025 01:08:39 +0000 (0:00:01.118) 0:03:05.059 ********** 2025-05-17 01:13:43.026733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.026741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.026759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.026768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-17 01:13:43.026775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.026794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026810 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.026819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026826 | orchestrator | 2025-05-17 01:13:43.026834 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-17 01:13:43.026841 | orchestrator | Saturday 17 May 2025 01:08:42 +0000 (0:00:02.495) 0:03:07.555 ********** 2025-05-17 01:13:43.026849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.026860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.026878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.026886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-17 01:13:43.026895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.026903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026922 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.026938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026945 | orchestrator | 2025-05-17 01:13:43.026952 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-17 01:13:43.026960 | orchestrator | Saturday 17 May 2025 01:08:47 +0000 (0:00:05.009) 0:03:12.564 ********** 2025-05-17 01:13:43.026967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-17 01:13:43.026975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.026994 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.027002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-17 01:13:43.027019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.027027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.027035 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.027045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-17 01:13:43.027058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.027075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.027083 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.027090 | orchestrator | 2025-05-17 01:13:43.027097 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-17 01:13:43.027105 | orchestrator | Saturday 17 May 2025 01:08:47 +0000 (0:00:00.626) 0:03:13.191 ********** 2025-05-17 01:13:43.027113 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:13:43.027122 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:13:43.027132 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:13:43.027141 | orchestrator | 2025-05-17 
01:13:43.027148 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-17 01:13:43.027155 | orchestrator | Saturday 17 May 2025 01:08:49 +0000 (0:00:01.421) 0:03:14.613 ********** 2025-05-17 01:13:43.027166 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.027173 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.027181 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.027189 | orchestrator | 2025-05-17 01:13:43.027199 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-17 01:13:43.027207 | orchestrator | Saturday 17 May 2025 01:08:49 +0000 (0:00:00.347) 0:03:14.960 ********** 2025-05-17 01:13:43.027215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2025-05-17 01:13:43.027227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.027242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-17 01:13:43.027755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.027793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.027802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.027811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.027842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.027854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.027868 | orchestrator | 2025-05-17 01:13:43.027882 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-17 01:13:43.027895 | orchestrator | Saturday 17 May 2025 01:08:51 +0000 (0:00:01.923) 0:03:16.884 ********** 2025-05-17 01:13:43.027903 | orchestrator | 2025-05-17 01:13:43.027911 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-17 01:13:43.027919 | orchestrator | Saturday 17 May 2025 01:08:51 +0000 (0:00:00.316) 0:03:17.201 ********** 2025-05-17 01:13:43.027927 | orchestrator | 2025-05-17 01:13:43.027936 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-17 01:13:43.027944 | orchestrator | Saturday 17 May 2025 01:08:52 +0000 (0:00:00.111) 0:03:17.313 ********** 2025-05-17 01:13:43.027952 | orchestrator | 2025-05-17 01:13:43.027960 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-17 01:13:43.027976 | orchestrator | Saturday 17 May 2025 01:08:52 +0000 (0:00:00.259) 0:03:17.572 ********** 2025-05-17 01:13:43.027995 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:13:43.028004 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:13:43.028013 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:13:43.028021 | orchestrator | 2025-05-17 01:13:43.028029 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-17 01:13:43.028037 | orchestrator | 
Saturday 17 May 2025 01:09:15 +0000 (0:00:23.476) 0:03:41.048 ********** 2025-05-17 01:13:43.028048 | orchestrator | changed: [testbed-node-0] 2025-05-17 01:13:43.028062 | orchestrator | changed: [testbed-node-1] 2025-05-17 01:13:43.028166 | orchestrator | changed: [testbed-node-2] 2025-05-17 01:13:43.028175 | orchestrator | 2025-05-17 01:13:43.028184 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-17 01:13:43.028191 | orchestrator | 2025-05-17 01:13:43.028199 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-17 01:13:43.028207 | orchestrator | Saturday 17 May 2025 01:09:26 +0000 (0:00:10.452) 0:03:51.500 ********** 2025-05-17 01:13:43.028216 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:13:43.028224 | orchestrator | 2025-05-17 01:13:43.028232 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-17 01:13:43.028242 | orchestrator | Saturday 17 May 2025 01:09:27 +0000 (0:00:01.302) 0:03:52.803 ********** 2025-05-17 01:13:43.028261 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.028270 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.028280 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.028290 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.028299 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.028309 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.028318 | orchestrator | 2025-05-17 01:13:43.028327 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-17 01:13:43.028336 | orchestrator | Saturday 17 May 2025 01:09:28 +0000 (0:00:00.687) 0:03:53.491 ********** 2025-05-17 01:13:43.028346 | orchestrator | skipping: 
[testbed-node-0] 2025-05-17 01:13:43.028355 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.028365 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.028374 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-17 01:13:43.028383 | orchestrator | 2025-05-17 01:13:43.028393 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-17 01:13:43.028403 | orchestrator | Saturday 17 May 2025 01:09:29 +0000 (0:00:01.053) 0:03:54.544 ********** 2025-05-17 01:13:43.028412 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-17 01:13:43.028422 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-17 01:13:43.028432 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-17 01:13:43.028441 | orchestrator | 2025-05-17 01:13:43.028450 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-17 01:13:43.028460 | orchestrator | Saturday 17 May 2025 01:09:30 +0000 (0:00:00.813) 0:03:55.358 ********** 2025-05-17 01:13:43.028469 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-17 01:13:43.028478 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-17 01:13:43.028511 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-17 01:13:43.028521 | orchestrator | 2025-05-17 01:13:43.028530 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-17 01:13:43.028541 | orchestrator | Saturday 17 May 2025 01:09:31 +0000 (0:00:01.325) 0:03:56.684 ********** 2025-05-17 01:13:43.028550 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-17 01:13:43.028599 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.028615 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-17 01:13:43.028625 | orchestrator | skipping: 
[testbed-node-4] 2025-05-17 01:13:43.028633 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-17 01:13:43.028641 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.028649 | orchestrator | 2025-05-17 01:13:43.028658 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-17 01:13:43.028666 | orchestrator | Saturday 17 May 2025 01:09:32 +0000 (0:00:00.613) 0:03:57.298 ********** 2025-05-17 01:13:43.028675 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-17 01:13:43.028707 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-17 01:13:43.028715 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.028723 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-17 01:13:43.028731 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-17 01:13:43.028783 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-17 01:13:43.028793 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-17 01:13:43.028801 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-17 01:13:43.028810 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.028818 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-17 01:13:43.028826 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-17 01:13:43.028842 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.028850 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-17 01:13:43.028939 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-17 01:13:43.028950 | 
orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-17 01:13:43.028960 | orchestrator | 2025-05-17 01:13:43.028984 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-17 01:13:43.028998 | orchestrator | Saturday 17 May 2025 01:09:33 +0000 (0:00:01.246) 0:03:58.544 ********** 2025-05-17 01:13:43.029011 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.029024 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.029035 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.029048 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:13:43.029060 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:13:43.029072 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:13:43.029084 | orchestrator | 2025-05-17 01:13:43.029096 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-17 01:13:43.029109 | orchestrator | Saturday 17 May 2025 01:09:34 +0000 (0:00:01.156) 0:03:59.701 ********** 2025-05-17 01:13:43.029123 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.029137 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.029152 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.029165 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:13:43.029178 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:13:43.029190 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:13:43.029203 | orchestrator | 2025-05-17 01:13:43.029216 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-17 01:13:43.029230 | orchestrator | Saturday 17 May 2025 01:09:36 +0000 (0:00:01.895) 0:04:01.597 ********** 2025-05-17 01:13:43.029245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.029271 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.029285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.029342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.029360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.029428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.029448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.029471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.029524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.029638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.029670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.029688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.029705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.029722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.029746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.029775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.029791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.029817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.029850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.029887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.029920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.030011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.030078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.030111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.030120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.030130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.030256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.030266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 
'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.030323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 
01:13:43.030342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.030351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030383 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030464 | orchestrator | 2025-05-17 01:13:43.030474 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-17 01:13:43.030544 | orchestrator | Saturday 17 May 2025 01:09:38 +0000 (0:00:02.458) 0:04:04.055 ********** 2025-05-17 01:13:43.030558 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-17 01:13:43.030569 | orchestrator | 2025-05-17 01:13:43.030578 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-17 01:13:43.030586 | orchestrator | Saturday 17 May 2025 01:09:40 +0000 (0:00:01.389) 0:04:05.445 ********** 2025-05-17 01:13:43.030603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030675 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.030800 | orchestrator | 2025-05-17 01:13:43.030809 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-17 01:13:43.030818 | orchestrator | Saturday 17 May 2025 01:09:44 +0000 (0:00:03.971) 0:04:09.416 ********** 2025-05-17 01:13:43.030835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.030845 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.030861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030871 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.030881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.030897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.030912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030922 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.030932 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.030947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.030956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030971 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.030980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.030989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.030999 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.031012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.031021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.031031 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.031046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.031055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.031070 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.031079 | orchestrator | 2025-05-17 01:13:43.031088 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-17 01:13:43.031097 | orchestrator | Saturday 17 May 2025 01:09:45 +0000 (0:00:01.789) 0:04:11.206 ********** 2025-05-17 01:13:43.031106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.031116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.031130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.031139 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.032160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.032213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.032224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.032234 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.032244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.032262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.032272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.032281 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.032301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.032322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.032338 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.032354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.032367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.032376 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.032390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.032399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-17 01:13:43.032408 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.032417 | orchestrator |
2025-05-17 01:13:43.032426 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-17 01:13:43.032435 | orchestrator | Saturday 17 May 2025 01:09:48 +0000 (0:00:02.384) 0:04:13.590 **********
2025-05-17 01:13:43.032450 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.032459 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.032468 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.032477 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-17 01:13:43.032553 | orchestrator |
2025-05-17 01:13:43.032570 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-05-17 01:13:43.032586 | orchestrator | Saturday 17 May 2025 01:09:49 +0000 (0:00:01.089) 0:04:14.680 **********
2025-05-17 01:13:43.032609 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-17 01:13:43.032619 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-17 01:13:43.032627 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-17 01:13:43.032635 | orchestrator |
2025-05-17 01:13:43.032643 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-05-17 01:13:43.032650 | orchestrator | Saturday 17 May 2025 01:09:50 +0000 (0:00:00.756) 0:04:15.436 **********
2025-05-17 01:13:43.032658 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-17 01:13:43.032666 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-17 01:13:43.032674 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-17 01:13:43.032681 | orchestrator |
2025-05-17 01:13:43.032689 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-05-17 01:13:43.032697 | orchestrator | Saturday 17 May 2025 01:09:50 +0000 (0:00:00.755) 0:04:16.192 **********
2025-05-17 01:13:43.032705 | orchestrator | ok: [testbed-node-3]
2025-05-17 01:13:43.032713 | orchestrator | ok: [testbed-node-4]
2025-05-17 01:13:43.032723 | orchestrator | ok: [testbed-node-5]
2025-05-17 01:13:43.032737 | orchestrator |
2025-05-17 01:13:43.032751 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-05-17 01:13:43.032762 | orchestrator | Saturday 17 May 2025 01:09:51 +0000 (0:00:00.651) 0:04:16.843 **********
2025-05-17 01:13:43.032772 | orchestrator | ok: [testbed-node-3]
2025-05-17 01:13:43.032781 | orchestrator | ok: [testbed-node-4]
2025-05-17 01:13:43.032792 | orchestrator | ok: [testbed-node-5]
2025-05-17 01:13:43.032801 | orchestrator |
2025-05-17 01:13:43.032810 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-05-17 01:13:43.032819 | orchestrator | Saturday 17 May 2025 01:09:52 +0000 (0:00:00.466) 0:04:17.309 **********
2025-05-17 01:13:43.032829 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-17 01:13:43.032838 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-17 01:13:43.032848 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-17 01:13:43.032857 | orchestrator |
2025-05-17 01:13:43.032866 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-05-17 01:13:43.032876 | orchestrator | Saturday 17 May 2025 01:09:53 +0000 (0:00:01.417) 0:04:18.727 **********
2025-05-17 01:13:43.032885 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-17 01:13:43.032895 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-17 01:13:43.032904 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-17 01:13:43.032913 | orchestrator |
2025-05-17 01:13:43.032923 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-05-17 01:13:43.032932 | orchestrator | Saturday 17 May 2025 01:09:54 +0000 (0:00:01.357) 0:04:20.085 **********
2025-05-17 01:13:43.032941 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-17 01:13:43.032951 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-17 01:13:43.032964 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-17 01:13:43.032977 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-05-17 01:13:43.032990 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-05-17 01:13:43.033002 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-05-17 01:13:43.033010 | orchestrator |
2025-05-17 01:13:43.033026 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-05-17 01:13:43.033034 | orchestrator | Saturday 17 May 2025 01:09:59 +0000 (0:00:04.850) 0:04:24.935 **********
2025-05-17 01:13:43.033041 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:13:43.033050 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:13:43.033058 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:13:43.033065 | orchestrator |
2025-05-17 01:13:43.033073 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-05-17 01:13:43.033081 | orchestrator | Saturday 17 May 2025 01:10:00 +0000 (0:00:00.331) 0:04:25.267 **********
2025-05-17 01:13:43.033094 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:13:43.033102 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:13:43.033110 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:13:43.033118 | orchestrator |
2025-05-17 01:13:43.033126 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-05-17 01:13:43.033134 | orchestrator | Saturday 17 May 2025 01:10:00 +0000 (0:00:00.337) 0:04:25.605 **********
2025-05-17 01:13:43.033142 | orchestrator | changed: [testbed-node-3]
2025-05-17 01:13:43.033149 | orchestrator | changed: [testbed-node-4]
2025-05-17 01:13:43.033157 | orchestrator | changed: [testbed-node-5]
2025-05-17 01:13:43.033165 | orchestrator |
2025-05-17 01:13:43.033173 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-05-17 01:13:43.033181 | orchestrator | Saturday 17 May 2025 01:10:01 +0000 (0:00:01.185) 0:04:26.791 **********
2025-05-17 01:13:43.033189 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-17 01:13:43.033199 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-17 01:13:43.033213 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-17 01:13:43.033228 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-17 01:13:43.033239 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-17 01:13:43.033247 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-17 01:13:43.033255 | orchestrator |
2025-05-17 01:13:43.033263 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-05-17 01:13:43.033276 | orchestrator | Saturday 17 May 2025 01:10:05 +0000 (0:00:03.555) 0:04:30.346 **********
2025-05-17 01:13:43.033285 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-17 01:13:43.033293 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-17 01:13:43.033300 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-17 01:13:43.033308 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-17 01:13:43.033316 | orchestrator | changed: [testbed-node-3]
2025-05-17 01:13:43.033324 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-17 01:13:43.033332 | orchestrator | changed: [testbed-node-4]
2025-05-17 01:13:43.033340 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-17 01:13:43.033347 | orchestrator | changed: [testbed-node-5]
2025-05-17 01:13:43.033355 | orchestrator |
2025-05-17 01:13:43.033363 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-05-17 01:13:43.033371 | orchestrator | Saturday 17 May 2025 01:10:08 +0000 (0:00:03.310) 0:04:33.657 **********
2025-05-17 01:13:43.033379 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:13:43.033388 | orchestrator |
2025-05-17 01:13:43.033402 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-05-17 01:13:43.033415 | orchestrator | Saturday 17 May 2025 01:10:08 +0000 (0:00:00.139) 0:04:33.796 **********
2025-05-17 01:13:43.033432 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:13:43.033440 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:13:43.033448 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:13:43.033456 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.033464 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.033471 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.033479 | orchestrator |
2025-05-17 01:13:43.033508 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-05-17 01:13:43.033516 | orchestrator | Saturday 17 May 2025 01:10:09 +0000 (0:00:00.841) 0:04:34.638 **********
2025-05-17 01:13:43.033524 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-17 01:13:43.033532 | orchestrator |
2025-05-17 01:13:43.033540 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-05-17 01:13:43.033547 | orchestrator | Saturday 17 May 2025 01:10:09 +0000 (0:00:00.381) 0:04:35.019 **********
2025-05-17 01:13:43.033555 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:13:43.033563 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:13:43.033571 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:13:43.033580 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.033587 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.033595 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.033603 | orchestrator |
2025-05-17 01:13:43.033611 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-05-17 01:13:43.033620 | orchestrator | Saturday 17 May 2025 01:10:10 +0000 (0:00:00.732) 0:04:35.751 **********
2025-05-17 01:13:43.033637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '',
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.033647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.033661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.033675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.033684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.033693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.033705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.033713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.033727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.033741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.033750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.033760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.033772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.033781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.033793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.033807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.033816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.033824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.033832 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.033845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.033854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.033873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.033882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.033890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 
01:13:43.033898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.033907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.033919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.033928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.033941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.033954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.033962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.033971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.033979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.033991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034169 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034255 | orchestrator | 2025-05-17 01:13:43.034263 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-17 01:13:43.034276 | orchestrator | Saturday 17 May 2025 01:10:14 +0000 (0:00:04.203) 0:04:39.955 ********** 2025-05-17 01:13:43.034290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.034304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.034317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.034331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.034340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.034361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.034370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.034379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 
'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.034387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.034399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.034413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.034445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.034453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.034465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.034502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.034520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.034544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.034552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.034561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.034579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.034588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.034601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.034645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.034654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.034696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.034725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.034766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.034774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.034860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.034881 | orchestrator | 2025-05-17 01:13:43.034889 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] 
******************* 2025-05-17 01:13:43.034897 | orchestrator | Saturday 17 May 2025 01:10:21 +0000 (0:00:06.807) 0:04:46.763 ********** 2025-05-17 01:13:43.034905 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.034913 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.034920 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.034928 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.034936 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.034943 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.034951 | orchestrator | 2025-05-17 01:13:43.034959 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-17 01:13:43.034967 | orchestrator | Saturday 17 May 2025 01:10:23 +0000 (0:00:01.707) 0:04:48.471 ********** 2025-05-17 01:13:43.034975 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-17 01:13:43.034983 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-17 01:13:43.034991 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-17 01:13:43.034999 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.035007 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-17 01:13:43.035019 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-17 01:13:43.035027 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.035036 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-17 01:13:43.035043 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-17 01:13:43.035051 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.035059 
| orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-17 01:13:43.035067 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-17 01:13:43.035074 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-17 01:13:43.035082 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-17 01:13:43.035096 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-17 01:13:43.035104 | orchestrator | 2025-05-17 01:13:43.035112 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-17 01:13:43.035120 | orchestrator | Saturday 17 May 2025 01:10:28 +0000 (0:00:05.090) 0:04:53.561 ********** 2025-05-17 01:13:43.035128 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.035136 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.035143 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.035151 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.035159 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.035167 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.035175 | orchestrator | 2025-05-17 01:13:43.035182 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-17 01:13:43.035190 | orchestrator | Saturday 17 May 2025 01:10:29 +0000 (0:00:00.832) 0:04:54.394 ********** 2025-05-17 01:13:43.035198 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-17 01:13:43.035206 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-17 01:13:43.035214 | orchestrator | skipping: [testbed-node-2] 
=> (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-17 01:13:43.035222 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-17 01:13:43.035229 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-17 01:13:43.035237 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-17 01:13:43.035245 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-17 01:13:43.035253 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-17 01:13:43.035261 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-17 01:13:43.035272 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-17 01:13:43.035281 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.035289 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-17 01:13:43.035296 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.035304 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-17 01:13:43.035312 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.035319 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-17 01:13:43.035327 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-17 
01:13:43.035335 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-17 01:13:43.035343 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-17 01:13:43.035351 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-17 01:13:43.035359 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-17 01:13:43.035366 | orchestrator | 2025-05-17 01:13:43.035374 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-17 01:13:43.035387 | orchestrator | Saturday 17 May 2025 01:10:35 +0000 (0:00:06.781) 0:05:01.176 ********** 2025-05-17 01:13:43.035396 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-17 01:13:43.035404 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-17 01:13:43.035416 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-17 01:13:43.035424 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-17 01:13:43.035432 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-17 01:13:43.035440 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-17 01:13:43.035448 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-17 01:13:43.035455 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-17 01:13:43.035463 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 
'sshd_config'}) 2025-05-17 01:13:43.035471 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-17 01:13:43.035479 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-17 01:13:43.035548 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-17 01:13:43.035563 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.035576 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-17 01:13:43.035584 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-17 01:13:43.035592 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.035600 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-17 01:13:43.035608 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.035616 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-17 01:13:43.035623 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-17 01:13:43.035631 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-17 01:13:43.035639 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-17 01:13:43.035647 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-17 01:13:43.035654 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-17 01:13:43.035662 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-17 01:13:43.035670 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-17 01:13:43.035678 | 
orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-17 01:13:43.035686 | orchestrator | 2025-05-17 01:13:43.035694 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-17 01:13:43.035701 | orchestrator | Saturday 17 May 2025 01:10:46 +0000 (0:00:10.140) 0:05:11.316 ********** 2025-05-17 01:13:43.035709 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.035717 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.035725 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.035733 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.035740 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.035748 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.035756 | orchestrator | 2025-05-17 01:13:43.035764 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-17 01:13:43.035777 | orchestrator | Saturday 17 May 2025 01:10:46 +0000 (0:00:00.695) 0:05:12.012 ********** 2025-05-17 01:13:43.035792 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.035800 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.035807 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.035814 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.035820 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.035827 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.035833 | orchestrator | 2025-05-17 01:13:43.035840 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-17 01:13:43.035847 | orchestrator | Saturday 17 May 2025 01:10:47 +0000 (0:00:00.859) 0:05:12.871 ********** 2025-05-17 01:13:43.035854 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.035860 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.035867 | orchestrator | 
skipping: [testbed-node-2] 2025-05-17 01:13:43.035873 | orchestrator | changed: [testbed-node-4] 2025-05-17 01:13:43.035880 | orchestrator | changed: [testbed-node-5] 2025-05-17 01:13:43.035887 | orchestrator | changed: [testbed-node-3] 2025-05-17 01:13:43.035894 | orchestrator | 2025-05-17 01:13:43.035900 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-17 01:13:43.035907 | orchestrator | Saturday 17 May 2025 01:10:50 +0000 (0:00:02.640) 0:05:15.512 ********** 2025-05-17 01:13:43.035921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.035935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.035947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.035958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.035971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.035983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.035991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2025-05-17 01:13:43.036013 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.036020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.036027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.036039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.036069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036096 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.036103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.036114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.036121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 
'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.036148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036180 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.036193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.036212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.036220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.036253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.036261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.036268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.036323 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.036330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:13:43.036426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt',
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.036439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.036454 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.036464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.036517 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036545 | orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.036552 | orchestrator | 2025-05-17 01:13:43.036558 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-17 01:13:43.036565 | orchestrator | Saturday 17 May 2025 01:10:51 +0000 (0:00:01.677) 0:05:17.189 ********** 2025-05-17 01:13:43.036576 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-17 01:13:43.036588 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-17 01:13:43.036600 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.036611 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-17 01:13:43.036618 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-17 01:13:43.036625 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.036632 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-17 01:13:43.036639 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-17 01:13:43.036645 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.036652 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-17 01:13:43.036659 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-17 01:13:43.036665 | orchestrator | skipping: [testbed-node-0] 2025-05-17 01:13:43.036672 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-17 01:13:43.036679 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-17 01:13:43.036685 | orchestrator | skipping: [testbed-node-1] 2025-05-17 01:13:43.036692 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-17 01:13:43.036698 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-17 01:13:43.036705 | 
orchestrator | skipping: [testbed-node-2] 2025-05-17 01:13:43.036712 | orchestrator | 2025-05-17 01:13:43.036718 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-17 01:13:43.036725 | orchestrator | Saturday 17 May 2025 01:10:52 +0000 (0:00:00.990) 0:05:18.180 ********** 2025-05-17 01:13:43.036737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.036750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.036758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.036775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.036789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.036800 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.036812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-17 01:13:43.036820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-17 01:13:43.036827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-17 01:13:43.036837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.036845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.036867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.036874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.036899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.036925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 
01:13:43.036933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.036947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.036954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.036967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.036974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.036990 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-17 01:13:43.037009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-17 01:13:43.037018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.037026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.037033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.037044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-17 01:13:43.037056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-17 01:13:43.037064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.037082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}}) 2025-05-17 01:13:43.037089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037122 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.037130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.037144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.037154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-17 01:13:43.037203 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-17 01:13:43.037219 | orchestrator | 2025-05-17 01:13:43.037226 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-17 01:13:43.037233 | orchestrator | Saturday 17 May 2025 01:10:56 +0000 (0:00:03.176) 0:05:21.356 ********** 2025-05-17 01:13:43.037240 | orchestrator | skipping: [testbed-node-3] 2025-05-17 01:13:43.037247 | orchestrator | skipping: [testbed-node-4] 2025-05-17 01:13:43.037259 | orchestrator | skipping: [testbed-node-5] 2025-05-17 01:13:43.037266 | 
orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.037272 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.037279 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.037285 | orchestrator |
2025-05-17 01:13:43.037292 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-17 01:13:43.037302 | orchestrator | Saturday 17 May 2025 01:10:56 +0000 (0:00:00.853) 0:05:22.210 **********
2025-05-17 01:13:43.037309 | orchestrator |
2025-05-17 01:13:43.037316 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-17 01:13:43.037323 | orchestrator | Saturday 17 May 2025 01:10:57 +0000 (0:00:00.121) 0:05:22.332 **********
2025-05-17 01:13:43.037329 | orchestrator |
2025-05-17 01:13:43.037336 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-17 01:13:43.037343 | orchestrator | Saturday 17 May 2025 01:10:57 +0000 (0:00:00.282) 0:05:22.614 **********
2025-05-17 01:13:43.037349 | orchestrator |
2025-05-17 01:13:43.037356 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-17 01:13:43.037367 | orchestrator | Saturday 17 May 2025 01:10:57 +0000 (0:00:00.107) 0:05:22.722 **********
2025-05-17 01:13:43.037379 | orchestrator |
2025-05-17 01:13:43.037389 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-17 01:13:43.037397 | orchestrator | Saturday 17 May 2025 01:10:57 +0000 (0:00:00.281) 0:05:23.003 **********
2025-05-17 01:13:43.037403 | orchestrator |
2025-05-17 01:13:43.037410 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-17 01:13:43.037417 | orchestrator | Saturday 17 May 2025 01:10:57 +0000 (0:00:00.107) 0:05:23.110 **********
2025-05-17 01:13:43.037424 | orchestrator |
2025-05-17 01:13:43.037430 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-05-17 01:13:43.037437 | orchestrator | Saturday 17 May 2025 01:10:58 +0000 (0:00:00.277) 0:05:23.388 **********
2025-05-17 01:13:43.037443 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:13:43.037450 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:13:43.037457 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:13:43.037463 | orchestrator |
2025-05-17 01:13:43.037470 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-05-17 01:13:43.037477 | orchestrator | Saturday 17 May 2025 01:11:05 +0000 (0:00:07.059) 0:05:30.448 **********
2025-05-17 01:13:43.037501 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:13:43.037508 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:13:43.037515 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:13:43.037522 | orchestrator |
2025-05-17 01:13:43.037528 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-05-17 01:13:43.037535 | orchestrator | Saturday 17 May 2025 01:11:20 +0000 (0:00:15.535) 0:05:45.983 **********
2025-05-17 01:13:43.037547 | orchestrator | changed: [testbed-node-3]
2025-05-17 01:13:43.037560 | orchestrator | changed: [testbed-node-4]
2025-05-17 01:13:43.037570 | orchestrator | changed: [testbed-node-5]
2025-05-17 01:13:43.037580 | orchestrator |
2025-05-17 01:13:43.037587 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-05-17 01:13:43.037593 | orchestrator | Saturday 17 May 2025 01:11:43 +0000 (0:00:23.121) 0:06:09.105 **********
2025-05-17 01:13:43.037600 | orchestrator | changed: [testbed-node-3]
2025-05-17 01:13:43.037607 | orchestrator | changed: [testbed-node-5]
2025-05-17 01:13:43.037613 | orchestrator | changed: [testbed-node-4]
2025-05-17 01:13:43.037620 | orchestrator |
2025-05-17 01:13:43.037626 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-05-17 01:13:43.037633 | orchestrator | Saturday 17 May 2025 01:12:12 +0000 (0:00:29.095) 0:06:38.201 **********
2025-05-17 01:13:43.037639 | orchestrator | changed: [testbed-node-3]
2025-05-17 01:13:43.037646 | orchestrator | changed: [testbed-node-4]
2025-05-17 01:13:43.037653 | orchestrator | changed: [testbed-node-5]
2025-05-17 01:13:43.037659 | orchestrator |
2025-05-17 01:13:43.037666 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-05-17 01:13:43.037678 | orchestrator | Saturday 17 May 2025 01:12:13 +0000 (0:00:00.770) 0:06:38.971 **********
2025-05-17 01:13:43.037684 | orchestrator | changed: [testbed-node-3]
2025-05-17 01:13:43.037691 | orchestrator | changed: [testbed-node-4]
2025-05-17 01:13:43.037698 | orchestrator | changed: [testbed-node-5]
2025-05-17 01:13:43.037704 | orchestrator |
2025-05-17 01:13:43.037711 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-05-17 01:13:43.037718 | orchestrator | Saturday 17 May 2025 01:12:14 +0000 (0:00:00.987) 0:06:39.958 **********
2025-05-17 01:13:43.037724 | orchestrator | changed: [testbed-node-5]
2025-05-17 01:13:43.037731 | orchestrator | changed: [testbed-node-4]
2025-05-17 01:13:43.037741 | orchestrator | changed: [testbed-node-3]
2025-05-17 01:13:43.037752 | orchestrator |
2025-05-17 01:13:43.037764 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-05-17 01:13:43.037774 | orchestrator | Saturday 17 May 2025 01:12:37 +0000 (0:00:22.776) 0:07:02.734 **********
2025-05-17 01:13:43.037781 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:13:43.037787 | orchestrator |
2025-05-17 01:13:43.037794 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-05-17 01:13:43.037800 | orchestrator | Saturday 17 May 2025 01:12:37 +0000 (0:00:00.112) 0:07:02.847 **********
2025-05-17 01:13:43.037807 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:13:43.037814 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:13:43.037821 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.037828 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.037834 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.037841 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-05-17 01:13:43.037848 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-17 01:13:43.037854 | orchestrator |
2025-05-17 01:13:43.037861 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-05-17 01:13:43.037867 | orchestrator | Saturday 17 May 2025 01:12:59 +0000 (0:00:21.944) 0:07:24.791 **********
2025-05-17 01:13:43.037874 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.037881 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:13:43.037887 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:13:43.037894 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.037901 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:13:43.037913 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.037925 | orchestrator |
2025-05-17 01:13:43.037935 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-05-17 01:13:43.037947 | orchestrator | Saturday 17 May 2025 01:13:08 +0000 (0:00:09.203) 0:07:33.994 **********
2025-05-17 01:13:43.037954 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:13:43.037961 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:13:43.037967 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.037974 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.037981 | orchestrator |
skipping: [testbed-node-2]
2025-05-17 01:13:43.037987 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-05-17 01:13:43.037994 | orchestrator |
2025-05-17 01:13:43.038001 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-17 01:13:43.038007 | orchestrator | Saturday 17 May 2025 01:13:11 +0000 (0:00:02.874) 0:07:36.869 **********
2025-05-17 01:13:43.038043 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-17 01:13:43.038052 | orchestrator |
2025-05-17 01:13:43.038059 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-17 01:13:43.038066 | orchestrator | Saturday 17 May 2025 01:13:21 +0000 (0:00:10.027) 0:07:46.897 **********
2025-05-17 01:13:43.038072 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-17 01:13:43.038079 | orchestrator |
2025-05-17 01:13:43.038095 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-05-17 01:13:43.038107 | orchestrator | Saturday 17 May 2025 01:13:22 +0000 (0:00:01.111) 0:07:48.008 **********
2025-05-17 01:13:43.038118 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:13:43.038125 | orchestrator |
2025-05-17 01:13:43.038132 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-05-17 01:13:43.038139 | orchestrator | Saturday 17 May 2025 01:13:23 +0000 (0:00:01.090) 0:07:49.099 **********
2025-05-17 01:13:43.038145 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-17 01:13:43.038152 | orchestrator |
2025-05-17 01:13:43.038159 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-05-17 01:13:43.038166 | orchestrator | Saturday 17 May 2025 01:13:33 +0000 (0:00:09.334) 0:07:58.434 **********
2025-05-17 01:13:43.038173 | orchestrator | ok: [testbed-node-3]
2025-05-17 01:13:43.038180 | orchestrator | ok: [testbed-node-4]
2025-05-17 01:13:43.038186 | orchestrator | ok: [testbed-node-5]
2025-05-17 01:13:43.038193 | orchestrator | ok: [testbed-node-0]
2025-05-17 01:13:43.038200 | orchestrator | ok: [testbed-node-1]
2025-05-17 01:13:43.038207 | orchestrator | ok: [testbed-node-2]
2025-05-17 01:13:43.038213 | orchestrator |
2025-05-17 01:13:43.038225 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-05-17 01:13:43.038232 | orchestrator |
2025-05-17 01:13:43.038239 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-05-17 01:13:43.038246 | orchestrator | Saturday 17 May 2025 01:13:35 +0000 (0:00:02.018) 0:08:00.453 **********
2025-05-17 01:13:43.038253 | orchestrator | changed: [testbed-node-0]
2025-05-17 01:13:43.038260 | orchestrator | changed: [testbed-node-1]
2025-05-17 01:13:43.038267 | orchestrator | changed: [testbed-node-2]
2025-05-17 01:13:43.038274 | orchestrator |
2025-05-17 01:13:43.038280 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-05-17 01:13:43.038287 | orchestrator |
2025-05-17 01:13:43.038297 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-05-17 01:13:43.038309 | orchestrator | Saturday 17 May 2025 01:13:36 +0000 (0:00:00.992) 0:08:01.445 **********
2025-05-17 01:13:43.038320 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.038327 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.038334 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.038340 | orchestrator |
2025-05-17 01:13:43.038347 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-05-17 01:13:43.038354 | orchestrator |
2025-05-17 01:13:43.038360 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-05-17 01:13:43.038367 | orchestrator | Saturday 17 May 2025 01:13:36 +0000 (0:00:00.760) 0:08:02.205 **********
2025-05-17 01:13:43.038374 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-05-17 01:13:43.038380 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-05-17 01:13:43.038387 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-05-17 01:13:43.038393 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-05-17 01:13:43.038400 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-05-17 01:13:43.038407 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-05-17 01:13:43.038414 | orchestrator | skipping: [testbed-node-3]
2025-05-17 01:13:43.038421 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-05-17 01:13:43.038428 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-05-17 01:13:43.038434 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-17 01:13:43.038441 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-05-17 01:13:43.038448 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-05-17 01:13:43.038454 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-05-17 01:13:43.038461 | orchestrator | skipping: [testbed-node-4]
2025-05-17 01:13:43.038478 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-05-17 01:13:43.038538 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-17 01:13:43.038550 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-17 01:13:43.038560 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-05-17 01:13:43.038571 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-05-17 01:13:43.038578 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-05-17 01:13:43.038585 | orchestrator | skipping: [testbed-node-5]
2025-05-17 01:13:43.038591 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-05-17 01:13:43.038598 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-17 01:13:43.038605 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-17 01:13:43.038612 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-05-17 01:13:43.038623 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-05-17 01:13:43.038629 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-05-17 01:13:43.038636 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.038643 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-05-17 01:13:43.038650 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-17 01:13:43.038656 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-17 01:13:43.038663 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-05-17 01:13:43.038669 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-05-17 01:13:43.038676 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-05-17 01:13:43.038683 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.038689 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-05-17 01:13:43.038696 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-17 01:13:43.038702 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-17 01:13:43.038709 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-05-17 01:13:43.038717 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-05-17 01:13:43.038729 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-05-17 01:13:43.038740 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.038749 | orchestrator |
2025-05-17 01:13:43.038756 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-05-17 01:13:43.038762 | orchestrator |
2025-05-17 01:13:43.038769 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-05-17 01:13:43.038776 | orchestrator | Saturday 17 May 2025 01:13:38 +0000 (0:00:01.315) 0:08:03.521 **********
2025-05-17 01:13:43.038782 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-05-17 01:13:43.038789 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-05-17 01:13:43.038796 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.038803 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-05-17 01:13:43.038809 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-05-17 01:13:43.038816 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.038829 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-05-17 01:13:43.038836 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-05-17 01:13:43.038843 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.038849 | orchestrator |
2025-05-17 01:13:43.038856 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-05-17 01:13:43.038863 | orchestrator |
2025-05-17 01:13:43.038869 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-05-17 01:13:43.038876 | orchestrator | Saturday 17 May 2025 01:13:39 +0000 (0:00:00.744) 0:08:04.266 **********
2025-05-17 01:13:43.038882 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.038898 | orchestrator |
2025-05-17 01:13:43.038905 | orchestrator | PLAY [Run Nova cell
online data migrations] ************************************
2025-05-17 01:13:43.038912 | orchestrator |
2025-05-17 01:13:43.038921 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-05-17 01:13:43.038933 | orchestrator | Saturday 17 May 2025 01:13:39 +0000 (0:00:00.886) 0:08:05.152 **********
2025-05-17 01:13:43.038944 | orchestrator | skipping: [testbed-node-0]
2025-05-17 01:13:43.038952 | orchestrator | skipping: [testbed-node-1]
2025-05-17 01:13:43.038959 | orchestrator | skipping: [testbed-node-2]
2025-05-17 01:13:43.038965 | orchestrator |
2025-05-17 01:13:43.038972 | orchestrator | PLAY RECAP *********************************************************************
2025-05-17 01:13:43.038979 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-17 01:13:43.038987 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-05-17 01:13:43.038994 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-17 01:13:43.039001 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-17 01:13:43.039008 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-17 01:13:43.039014 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-17 01:13:43.039021 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-17 01:13:43.039027 | orchestrator |
2025-05-17 01:13:43.039034 | orchestrator |
2025-05-17 01:13:43.039041 | orchestrator | TASKS RECAP ********************************************************************
2025-05-17 01:13:43.039048 | orchestrator | Saturday 17 May 2025 01:13:40 +0000 (0:00:00.488) 0:08:05.641 **********
2025-05-17 01:13:43.039054 | orchestrator | ===============================================================================
2025-05-17 01:13:43.039061 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 29.10s
2025-05-17 01:13:43.039068 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 27.67s
2025-05-17 01:13:43.039074 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.48s
2025-05-17 01:13:43.039084 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.12s
2025-05-17 01:13:43.039091 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.78s
2025-05-17 01:13:43.039097 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.94s
2025-05-17 01:13:43.039103 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.04s
2025-05-17 01:13:43.039109 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.54s
2025-05-17 01:13:43.039115 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.15s
2025-05-17 01:13:43.039122 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.02s
2025-05-17 01:13:43.039128 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.94s
2025-05-17 01:13:43.039134 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.45s
2025-05-17 01:13:43.039141 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.19s
2025-05-17 01:13:43.039147 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.14s
2025-05-17 01:13:43.039153 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.03s
2025-05-17 01:13:43.039165 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.02s
2025-05-17 01:13:43.039172 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.33s
2025-05-17 01:13:43.039178 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.20s
2025-05-17 01:13:43.039184 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.59s
2025-05-17 01:13:43.039190 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.05s
2025-05-17 01:13:46.070163 | orchestrator | 2025-05-17 01:13:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:13:46.070247 | orchestrator | 2025-05-17 01:13:46 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:14:01.297166 | orchestrator | 2025-05-17 01:14:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:14:01.297280 | orchestrator | 2025-05-17 01:14:01 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:14:50.057062 | orchestrator | 2025-05-17 01:14:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED
2025-05-17 01:14:50.057169
| orchestrator | 2025-05-17 01:14:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:14:53.116839 | orchestrator | 2025-05-17 01:14:53 | INFO  | Task e89121f8-1ec9-48b4-96ed-10d61d46e8c8 is in state STARTED 2025-05-17 01:14:53.119792 | orchestrator | 2025-05-17 01:14:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:14:53.119854 | orchestrator | 2025-05-17 01:14:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:14:56.183861 | orchestrator | 2025-05-17 01:14:56 | INFO  | Task e89121f8-1ec9-48b4-96ed-10d61d46e8c8 is in state STARTED 2025-05-17 01:14:56.185364 | orchestrator | 2025-05-17 01:14:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:14:56.185442 | orchestrator | 2025-05-17 01:14:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:14:59.245826 | orchestrator | 2025-05-17 01:14:59 | INFO  | Task e89121f8-1ec9-48b4-96ed-10d61d46e8c8 is in state STARTED 2025-05-17 01:14:59.246143 | orchestrator | 2025-05-17 01:14:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:14:59.246398 | orchestrator | 2025-05-17 01:14:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:02.307342 | orchestrator | 2025-05-17 01:15:02 | INFO  | Task e89121f8-1ec9-48b4-96ed-10d61d46e8c8 is in state STARTED 2025-05-17 01:15:02.309606 | orchestrator | 2025-05-17 01:15:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:02.309641 | orchestrator | 2025-05-17 01:15:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:05.363544 | orchestrator | 2025-05-17 01:15:05 | INFO  | Task e89121f8-1ec9-48b4-96ed-10d61d46e8c8 is in state SUCCESS 2025-05-17 01:15:05.365578 | orchestrator | 2025-05-17 01:15:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:05.365618 | orchestrator | 2025-05-17 01:15:05 | INFO  | Wait 1 second(s) until the next 
check 2025-05-17 01:15:08.417939 | orchestrator | 2025-05-17 01:15:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:08.418106 | orchestrator | 2025-05-17 01:15:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:11.461035 | orchestrator | 2025-05-17 01:15:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:11.461142 | orchestrator | 2025-05-17 01:15:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:14.521354 | orchestrator | 2025-05-17 01:15:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:14.521460 | orchestrator | 2025-05-17 01:15:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:17.567885 | orchestrator | 2025-05-17 01:15:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:17.568007 | orchestrator | 2025-05-17 01:15:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:20.622863 | orchestrator | 2025-05-17 01:15:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:20.622968 | orchestrator | 2025-05-17 01:15:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:23.671742 | orchestrator | 2025-05-17 01:15:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:23.671851 | orchestrator | 2025-05-17 01:15:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:26.719653 | orchestrator | 2025-05-17 01:15:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:26.719758 | orchestrator | 2025-05-17 01:15:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:29.768252 | orchestrator | 2025-05-17 01:15:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:29.768366 | orchestrator | 2025-05-17 01:15:29 | INFO  | Wait 1 second(s) until the next check 
2025-05-17 01:15:32.815967 | orchestrator | 2025-05-17 01:15:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:32.816098 | orchestrator | 2025-05-17 01:15:32 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:35.867068 | orchestrator | 2025-05-17 01:15:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:35.867179 | orchestrator | 2025-05-17 01:15:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:38.912220 | orchestrator | 2025-05-17 01:15:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:38.912327 | orchestrator | 2025-05-17 01:15:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:41.954995 | orchestrator | 2025-05-17 01:15:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:41.955108 | orchestrator | 2025-05-17 01:15:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:45.005802 | orchestrator | 2025-05-17 01:15:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:45.005894 | orchestrator | 2025-05-17 01:15:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:48.052781 | orchestrator | 2025-05-17 01:15:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:48.052902 | orchestrator | 2025-05-17 01:15:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:51.102712 | orchestrator | 2025-05-17 01:15:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:51.102794 | orchestrator | 2025-05-17 01:15:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:15:54.145193 | orchestrator | 2025-05-17 01:15:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:54.145306 | orchestrator | 2025-05-17 01:15:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 
01:15:57.193715 | orchestrator | 2025-05-17 01:15:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:15:57.193823 | orchestrator | 2025-05-17 01:15:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:00.244479 | orchestrator | 2025-05-17 01:16:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:00.244659 | orchestrator | 2025-05-17 01:16:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:03.288212 | orchestrator | 2025-05-17 01:16:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:03.288311 | orchestrator | 2025-05-17 01:16:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:06.338216 | orchestrator | 2025-05-17 01:16:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:06.338326 | orchestrator | 2025-05-17 01:16:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:09.381274 | orchestrator | 2025-05-17 01:16:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:09.381405 | orchestrator | 2025-05-17 01:16:09 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:12.421997 | orchestrator | 2025-05-17 01:16:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:12.422163 | orchestrator | 2025-05-17 01:16:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:15.472747 | orchestrator | 2025-05-17 01:16:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:15.472849 | orchestrator | 2025-05-17 01:16:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:18.518527 | orchestrator | 2025-05-17 01:16:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:18.518694 | orchestrator | 2025-05-17 01:16:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:21.559089 
| orchestrator | 2025-05-17 01:16:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:21.559220 | orchestrator | 2025-05-17 01:16:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:24.604973 | orchestrator | 2025-05-17 01:16:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:24.605083 | orchestrator | 2025-05-17 01:16:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:27.650918 | orchestrator | 2025-05-17 01:16:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:27.651041 | orchestrator | 2025-05-17 01:16:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:30.699463 | orchestrator | 2025-05-17 01:16:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:30.699580 | orchestrator | 2025-05-17 01:16:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:33.751936 | orchestrator | 2025-05-17 01:16:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:33.752069 | orchestrator | 2025-05-17 01:16:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:36.790431 | orchestrator | 2025-05-17 01:16:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:36.790544 | orchestrator | 2025-05-17 01:16:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:39.835389 | orchestrator | 2025-05-17 01:16:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:39.835494 | orchestrator | 2025-05-17 01:16:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:42.881220 | orchestrator | 2025-05-17 01:16:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:42.881317 | orchestrator | 2025-05-17 01:16:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:45.924762 | orchestrator 
| 2025-05-17 01:16:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:45.924891 | orchestrator | 2025-05-17 01:16:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:48.967098 | orchestrator | 2025-05-17 01:16:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:48.967210 | orchestrator | 2025-05-17 01:16:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:52.015163 | orchestrator | 2025-05-17 01:16:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:52.015293 | orchestrator | 2025-05-17 01:16:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:55.059116 | orchestrator | 2025-05-17 01:16:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:55.059202 | orchestrator | 2025-05-17 01:16:55 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:16:58.109368 | orchestrator | 2025-05-17 01:16:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:16:58.109474 | orchestrator | 2025-05-17 01:16:58 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:01.153175 | orchestrator | 2025-05-17 01:17:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:01.153303 | orchestrator | 2025-05-17 01:17:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:04.196825 | orchestrator | 2025-05-17 01:17:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:04.196936 | orchestrator | 2025-05-17 01:17:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:07.253864 | orchestrator | 2025-05-17 01:17:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:07.254014 | orchestrator | 2025-05-17 01:17:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:10.296436 | orchestrator | 2025-05-17 
01:17:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:10.296535 | orchestrator | 2025-05-17 01:17:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:13.348382 | orchestrator | 2025-05-17 01:17:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:13.348479 | orchestrator | 2025-05-17 01:17:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:16.393055 | orchestrator | 2025-05-17 01:17:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:16.393164 | orchestrator | 2025-05-17 01:17:16 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:19.444765 | orchestrator | 2025-05-17 01:17:19 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:19.444876 | orchestrator | 2025-05-17 01:17:19 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:22.490378 | orchestrator | 2025-05-17 01:17:22 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:22.490488 | orchestrator | 2025-05-17 01:17:22 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:25.537925 | orchestrator | 2025-05-17 01:17:25 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:25.538092 | orchestrator | 2025-05-17 01:17:25 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:28.586200 | orchestrator | 2025-05-17 01:17:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:28.586308 | orchestrator | 2025-05-17 01:17:28 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:31.640311 | orchestrator | 2025-05-17 01:17:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:31.640428 | orchestrator | 2025-05-17 01:17:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:34.688373 | orchestrator | 2025-05-17 01:17:34 | INFO 
 | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:34.688469 | orchestrator | 2025-05-17 01:17:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:37.737053 | orchestrator | 2025-05-17 01:17:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:37.737194 | orchestrator | 2025-05-17 01:17:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:40.783834 | orchestrator | 2025-05-17 01:17:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:40.783942 | orchestrator | 2025-05-17 01:17:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:43.828870 | orchestrator | 2025-05-17 01:17:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:43.828984 | orchestrator | 2025-05-17 01:17:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:46.880851 | orchestrator | 2025-05-17 01:17:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:46.880947 | orchestrator | 2025-05-17 01:17:46 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:49.932947 | orchestrator | 2025-05-17 01:17:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:49.933059 | orchestrator | 2025-05-17 01:17:49 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:52.983486 | orchestrator | 2025-05-17 01:17:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:52.983593 | orchestrator | 2025-05-17 01:17:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:56.038952 | orchestrator | 2025-05-17 01:17:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:56.039064 | orchestrator | 2025-05-17 01:17:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:17:59.083445 | orchestrator | 2025-05-17 01:17:59 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:17:59.083548 | orchestrator | 2025-05-17 01:17:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:02.131380 | orchestrator | 2025-05-17 01:18:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:02.131496 | orchestrator | 2025-05-17 01:18:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:05.182461 | orchestrator | 2025-05-17 01:18:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:05.182618 | orchestrator | 2025-05-17 01:18:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:08.225519 | orchestrator | 2025-05-17 01:18:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:08.225631 | orchestrator | 2025-05-17 01:18:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:11.274425 | orchestrator | 2025-05-17 01:18:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:11.274528 | orchestrator | 2025-05-17 01:18:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:14.324341 | orchestrator | 2025-05-17 01:18:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:14.324448 | orchestrator | 2025-05-17 01:18:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:17.373469 | orchestrator | 2025-05-17 01:18:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:17.373542 | orchestrator | 2025-05-17 01:18:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:20.420436 | orchestrator | 2025-05-17 01:18:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:20.420715 | orchestrator | 2025-05-17 01:18:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:23.473032 | orchestrator | 2025-05-17 01:18:23 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:23.473143 | orchestrator | 2025-05-17 01:18:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:26.526410 | orchestrator | 2025-05-17 01:18:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:26.526521 | orchestrator | 2025-05-17 01:18:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:29.571223 | orchestrator | 2025-05-17 01:18:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:29.571334 | orchestrator | 2025-05-17 01:18:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:32.624631 | orchestrator | 2025-05-17 01:18:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:32.624807 | orchestrator | 2025-05-17 01:18:32 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:35.672087 | orchestrator | 2025-05-17 01:18:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:35.672193 | orchestrator | 2025-05-17 01:18:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:38.723041 | orchestrator | 2025-05-17 01:18:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:38.723153 | orchestrator | 2025-05-17 01:18:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:41.776642 | orchestrator | 2025-05-17 01:18:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:41.777175 | orchestrator | 2025-05-17 01:18:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:44.823292 | orchestrator | 2025-05-17 01:18:44 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:44.823375 | orchestrator | 2025-05-17 01:18:44 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:47.872579 | orchestrator | 2025-05-17 01:18:47 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:47.872780 | orchestrator | 2025-05-17 01:18:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:50.929404 | orchestrator | 2025-05-17 01:18:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:50.929512 | orchestrator | 2025-05-17 01:18:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:53.981872 | orchestrator | 2025-05-17 01:18:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:53.981983 | orchestrator | 2025-05-17 01:18:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:18:57.035423 | orchestrator | 2025-05-17 01:18:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:18:57.035537 | orchestrator | 2025-05-17 01:18:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:00.084098 | orchestrator | 2025-05-17 01:19:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:00.084171 | orchestrator | 2025-05-17 01:19:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:03.128134 | orchestrator | 2025-05-17 01:19:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:03.128300 | orchestrator | 2025-05-17 01:19:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:06.169103 | orchestrator | 2025-05-17 01:19:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:06.169223 | orchestrator | 2025-05-17 01:19:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:09.216157 | orchestrator | 2025-05-17 01:19:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:09.216271 | orchestrator | 2025-05-17 01:19:09 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:12.269900 | orchestrator | 2025-05-17 01:19:12 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:12.270095 | orchestrator | 2025-05-17 01:19:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:15.307509 | orchestrator | 2025-05-17 01:19:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:15.307877 | orchestrator | 2025-05-17 01:19:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:18.357025 | orchestrator | 2025-05-17 01:19:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:18.357127 | orchestrator | 2025-05-17 01:19:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:21.402859 | orchestrator | 2025-05-17 01:19:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:21.402973 | orchestrator | 2025-05-17 01:19:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:24.450521 | orchestrator | 2025-05-17 01:19:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:24.450615 | orchestrator | 2025-05-17 01:19:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:27.500098 | orchestrator | 2025-05-17 01:19:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:27.500188 | orchestrator | 2025-05-17 01:19:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:30.542889 | orchestrator | 2025-05-17 01:19:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:30.543003 | orchestrator | 2025-05-17 01:19:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:33.594415 | orchestrator | 2025-05-17 01:19:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:33.594527 | orchestrator | 2025-05-17 01:19:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:36.639256 | orchestrator | 2025-05-17 01:19:36 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:36.640330 | orchestrator | 2025-05-17 01:19:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:39.688907 | orchestrator | 2025-05-17 01:19:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:39.689040 | orchestrator | 2025-05-17 01:19:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:42.739984 | orchestrator | 2025-05-17 01:19:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:42.740097 | orchestrator | 2025-05-17 01:19:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:45.787906 | orchestrator | 2025-05-17 01:19:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:45.788019 | orchestrator | 2025-05-17 01:19:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:48.835750 | orchestrator | 2025-05-17 01:19:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:48.835863 | orchestrator | 2025-05-17 01:19:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:51.886209 | orchestrator | 2025-05-17 01:19:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:51.886327 | orchestrator | 2025-05-17 01:19:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:54.927551 | orchestrator | 2025-05-17 01:19:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:54.927669 | orchestrator | 2025-05-17 01:19:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:19:57.980814 | orchestrator | 2025-05-17 01:19:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:19:57.980926 | orchestrator | 2025-05-17 01:19:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:20:01.031131 | orchestrator | 2025-05-17 01:20:01 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:20:01.031252 | orchestrator | 2025-05-17 01:20:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:20:04.073272 | orchestrator | 2025-05-17 01:20:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:20:04.073370 | orchestrator | 2025-05-17 01:20:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:20:07.122852 | orchestrator | 2025-05-17 01:20:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:20:07.123005 | orchestrator | 2025-05-17 01:20:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:20:10.177439 | orchestrator | 2025-05-17 01:20:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:20:10.177550 | orchestrator | 2025-05-17 01:20:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:20:13.218412 | orchestrator | 2025-05-17 01:20:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:20:13.218546 | orchestrator | 2025-05-17 01:20:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:20:16.266232 | orchestrator | 2025-05-17 01:20:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:20:16.266309 | orchestrator | 2025-05-17 01:20:16 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:20:19.309433 | orchestrator | 2025-05-17 01:20:19 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:20:19.309525 | orchestrator | 2025-05-17 01:20:19 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:20:22.356331 | orchestrator | 2025-05-17 01:20:22 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:20:22.356972 | orchestrator | 2025-05-17 01:20:22 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:20:25.401896 | orchestrator | 2025-05-17 01:20:25 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:20:25.402008 | orchestrator | 2025-05-17 01:20:25 | INFO  | Wait 1 second(s) until the next check
[... "Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED" / "Wait 1 second(s) until the next check" poll cycle repeated every ~3 seconds, 01:20:28 through 01:24:50 ...]
2025-05-17 01:24:53.712485 | orchestrator | 2025-05-17 01:24:53 | INFO  | Task 501113eb-9383-4fef-8ed4-93ac5c945540 is in state STARTED
2025-05-17 01:25:02.886100 | orchestrator | 2025-05-17 01:25:02 | INFO  | Task 501113eb-9383-4fef-8ed4-93ac5c945540 is in state SUCCESS
[... "Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED" / "Wait 1 second(s) until the next check" poll cycle repeated every ~3 seconds, 01:25:05 through 01:28:24 ...]
2025-05-17 01:28:27.121049 | orchestrator | 2025-05-17 01:28:27 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:27.121158 | orchestrator | 2025-05-17 01:28:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:30.169226 | orchestrator | 2025-05-17 01:28:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:30.169336 | orchestrator | 2025-05-17 01:28:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:33.219332 | orchestrator | 2025-05-17 01:28:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:33.219424 | orchestrator | 2025-05-17 01:28:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:36.264164 | orchestrator | 2025-05-17 01:28:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:36.264262 | orchestrator | 2025-05-17 01:28:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:39.312232 | orchestrator | 2025-05-17 01:28:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:39.312342 | orchestrator | 2025-05-17 01:28:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:42.359240 | orchestrator | 2025-05-17 01:28:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:42.359378 | orchestrator | 2025-05-17 01:28:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:45.400104 | orchestrator | 2025-05-17 01:28:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:45.400212 | orchestrator | 2025-05-17 01:28:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:48.448728 | orchestrator | 2025-05-17 01:28:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:48.448843 | orchestrator | 2025-05-17 01:28:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:51.489528 | orchestrator | 2025-05-17 01:28:51 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:51.489670 | orchestrator | 2025-05-17 01:28:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:54.528711 | orchestrator | 2025-05-17 01:28:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:54.528824 | orchestrator | 2025-05-17 01:28:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:28:57.571405 | orchestrator | 2025-05-17 01:28:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:28:57.571513 | orchestrator | 2025-05-17 01:28:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:00.619564 | orchestrator | 2025-05-17 01:29:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:00.619718 | orchestrator | 2025-05-17 01:29:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:03.661311 | orchestrator | 2025-05-17 01:29:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:03.661444 | orchestrator | 2025-05-17 01:29:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:06.713787 | orchestrator | 2025-05-17 01:29:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:06.713889 | orchestrator | 2025-05-17 01:29:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:09.761428 | orchestrator | 2025-05-17 01:29:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:09.761536 | orchestrator | 2025-05-17 01:29:09 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:12.814340 | orchestrator | 2025-05-17 01:29:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:12.814465 | orchestrator | 2025-05-17 01:29:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:15.861891 | orchestrator | 2025-05-17 01:29:15 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:15.862001 | orchestrator | 2025-05-17 01:29:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:18.906302 | orchestrator | 2025-05-17 01:29:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:18.906424 | orchestrator | 2025-05-17 01:29:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:21.954011 | orchestrator | 2025-05-17 01:29:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:21.954186 | orchestrator | 2025-05-17 01:29:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:25.000469 | orchestrator | 2025-05-17 01:29:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:25.000553 | orchestrator | 2025-05-17 01:29:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:28.051285 | orchestrator | 2025-05-17 01:29:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:28.051362 | orchestrator | 2025-05-17 01:29:28 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:31.103849 | orchestrator | 2025-05-17 01:29:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:31.103973 | orchestrator | 2025-05-17 01:29:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:34.147967 | orchestrator | 2025-05-17 01:29:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:34.148072 | orchestrator | 2025-05-17 01:29:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:37.201254 | orchestrator | 2025-05-17 01:29:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:37.201492 | orchestrator | 2025-05-17 01:29:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:40.249985 | orchestrator | 2025-05-17 01:29:40 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:40.250147 | orchestrator | 2025-05-17 01:29:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:43.294806 | orchestrator | 2025-05-17 01:29:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:43.294919 | orchestrator | 2025-05-17 01:29:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:46.343158 | orchestrator | 2025-05-17 01:29:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:46.343290 | orchestrator | 2025-05-17 01:29:46 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:49.391174 | orchestrator | 2025-05-17 01:29:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:49.391423 | orchestrator | 2025-05-17 01:29:49 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:52.437946 | orchestrator | 2025-05-17 01:29:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:52.438116 | orchestrator | 2025-05-17 01:29:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:55.484760 | orchestrator | 2025-05-17 01:29:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:55.484876 | orchestrator | 2025-05-17 01:29:55 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:29:58.535050 | orchestrator | 2025-05-17 01:29:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:29:58.535178 | orchestrator | 2025-05-17 01:29:58 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:01.580424 | orchestrator | 2025-05-17 01:30:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:01.580535 | orchestrator | 2025-05-17 01:30:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:04.619843 | orchestrator | 2025-05-17 01:30:04 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:04.619954 | orchestrator | 2025-05-17 01:30:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:07.673200 | orchestrator | 2025-05-17 01:30:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:07.673303 | orchestrator | 2025-05-17 01:30:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:10.725059 | orchestrator | 2025-05-17 01:30:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:10.725195 | orchestrator | 2025-05-17 01:30:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:13.776677 | orchestrator | 2025-05-17 01:30:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:13.776782 | orchestrator | 2025-05-17 01:30:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:16.826561 | orchestrator | 2025-05-17 01:30:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:16.826845 | orchestrator | 2025-05-17 01:30:16 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:19.875579 | orchestrator | 2025-05-17 01:30:19 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:19.875727 | orchestrator | 2025-05-17 01:30:19 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:22.923799 | orchestrator | 2025-05-17 01:30:22 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:22.923915 | orchestrator | 2025-05-17 01:30:22 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:25.972940 | orchestrator | 2025-05-17 01:30:25 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:25.973048 | orchestrator | 2025-05-17 01:30:25 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:29.019886 | orchestrator | 2025-05-17 01:30:29 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:29.020099 | orchestrator | 2025-05-17 01:30:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:32.072433 | orchestrator | 2025-05-17 01:30:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:32.072560 | orchestrator | 2025-05-17 01:30:32 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:35.122325 | orchestrator | 2025-05-17 01:30:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:35.122445 | orchestrator | 2025-05-17 01:30:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:38.168321 | orchestrator | 2025-05-17 01:30:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:38.168438 | orchestrator | 2025-05-17 01:30:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:41.215826 | orchestrator | 2025-05-17 01:30:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:41.215943 | orchestrator | 2025-05-17 01:30:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:44.264098 | orchestrator | 2025-05-17 01:30:44 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:44.264197 | orchestrator | 2025-05-17 01:30:44 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:47.306104 | orchestrator | 2025-05-17 01:30:47 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:47.306192 | orchestrator | 2025-05-17 01:30:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:50.351770 | orchestrator | 2025-05-17 01:30:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:50.351891 | orchestrator | 2025-05-17 01:30:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:53.404148 | orchestrator | 2025-05-17 01:30:53 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:53.404240 | orchestrator | 2025-05-17 01:30:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:56.449314 | orchestrator | 2025-05-17 01:30:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:56.449414 | orchestrator | 2025-05-17 01:30:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:30:59.497597 | orchestrator | 2025-05-17 01:30:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:30:59.497781 | orchestrator | 2025-05-17 01:30:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:02.554495 | orchestrator | 2025-05-17 01:31:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:02.554594 | orchestrator | 2025-05-17 01:31:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:05.604521 | orchestrator | 2025-05-17 01:31:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:05.604615 | orchestrator | 2025-05-17 01:31:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:08.655352 | orchestrator | 2025-05-17 01:31:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:08.655465 | orchestrator | 2025-05-17 01:31:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:11.699327 | orchestrator | 2025-05-17 01:31:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:11.700380 | orchestrator | 2025-05-17 01:31:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:14.746292 | orchestrator | 2025-05-17 01:31:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:14.746395 | orchestrator | 2025-05-17 01:31:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:17.794895 | orchestrator | 2025-05-17 01:31:17 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:17.795009 | orchestrator | 2025-05-17 01:31:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:20.844004 | orchestrator | 2025-05-17 01:31:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:20.844113 | orchestrator | 2025-05-17 01:31:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:23.893278 | orchestrator | 2025-05-17 01:31:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:23.893393 | orchestrator | 2025-05-17 01:31:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:26.940894 | orchestrator | 2025-05-17 01:31:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:26.941003 | orchestrator | 2025-05-17 01:31:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:29.984992 | orchestrator | 2025-05-17 01:31:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:29.985103 | orchestrator | 2025-05-17 01:31:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:33.035257 | orchestrator | 2025-05-17 01:31:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:33.035363 | orchestrator | 2025-05-17 01:31:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:36.085084 | orchestrator | 2025-05-17 01:31:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:36.085189 | orchestrator | 2025-05-17 01:31:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:39.134456 | orchestrator | 2025-05-17 01:31:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:39.134565 | orchestrator | 2025-05-17 01:31:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:42.186816 | orchestrator | 2025-05-17 01:31:42 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:42.186938 | orchestrator | 2025-05-17 01:31:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:45.232614 | orchestrator | 2025-05-17 01:31:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:45.232781 | orchestrator | 2025-05-17 01:31:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:48.282117 | orchestrator | 2025-05-17 01:31:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:48.282231 | orchestrator | 2025-05-17 01:31:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:51.320632 | orchestrator | 2025-05-17 01:31:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:51.320791 | orchestrator | 2025-05-17 01:31:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:54.371746 | orchestrator | 2025-05-17 01:31:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:54.371843 | orchestrator | 2025-05-17 01:31:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:31:57.412578 | orchestrator | 2025-05-17 01:31:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:31:57.412707 | orchestrator | 2025-05-17 01:31:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:00.456934 | orchestrator | 2025-05-17 01:32:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:00.457065 | orchestrator | 2025-05-17 01:32:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:03.503152 | orchestrator | 2025-05-17 01:32:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:03.503260 | orchestrator | 2025-05-17 01:32:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:06.545471 | orchestrator | 2025-05-17 01:32:06 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:06.545592 | orchestrator | 2025-05-17 01:32:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:09.598451 | orchestrator | 2025-05-17 01:32:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:09.598557 | orchestrator | 2025-05-17 01:32:09 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:12.650930 | orchestrator | 2025-05-17 01:32:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:12.651788 | orchestrator | 2025-05-17 01:32:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:15.702623 | orchestrator | 2025-05-17 01:32:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:15.702784 | orchestrator | 2025-05-17 01:32:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:18.758280 | orchestrator | 2025-05-17 01:32:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:18.758395 | orchestrator | 2025-05-17 01:32:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:21.807367 | orchestrator | 2025-05-17 01:32:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:21.807462 | orchestrator | 2025-05-17 01:32:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:24.862295 | orchestrator | 2025-05-17 01:32:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:24.862392 | orchestrator | 2025-05-17 01:32:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:27.909429 | orchestrator | 2025-05-17 01:32:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:27.909528 | orchestrator | 2025-05-17 01:32:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:30.954336 | orchestrator | 2025-05-17 01:32:30 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:30.954447 | orchestrator | 2025-05-17 01:32:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:34.011397 | orchestrator | 2025-05-17 01:32:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:34.011492 | orchestrator | 2025-05-17 01:32:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:37.059923 | orchestrator | 2025-05-17 01:32:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:37.060025 | orchestrator | 2025-05-17 01:32:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:40.102761 | orchestrator | 2025-05-17 01:32:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:40.102868 | orchestrator | 2025-05-17 01:32:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:43.148394 | orchestrator | 2025-05-17 01:32:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:43.148510 | orchestrator | 2025-05-17 01:32:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:46.201018 | orchestrator | 2025-05-17 01:32:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:46.201127 | orchestrator | 2025-05-17 01:32:46 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:49.257520 | orchestrator | 2025-05-17 01:32:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:49.257637 | orchestrator | 2025-05-17 01:32:49 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:52.305854 | orchestrator | 2025-05-17 01:32:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:52.306118 | orchestrator | 2025-05-17 01:32:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:55.353206 | orchestrator | 2025-05-17 01:32:55 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:55.353340 | orchestrator | 2025-05-17 01:32:55 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:32:58.397909 | orchestrator | 2025-05-17 01:32:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:32:58.398076 | orchestrator | 2025-05-17 01:32:58 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:01.440146 | orchestrator | 2025-05-17 01:33:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:01.440278 | orchestrator | 2025-05-17 01:33:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:04.488701 | orchestrator | 2025-05-17 01:33:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:04.488814 | orchestrator | 2025-05-17 01:33:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:07.538276 | orchestrator | 2025-05-17 01:33:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:07.538392 | orchestrator | 2025-05-17 01:33:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:10.586003 | orchestrator | 2025-05-17 01:33:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:10.586167 | orchestrator | 2025-05-17 01:33:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:13.634562 | orchestrator | 2025-05-17 01:33:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:13.634702 | orchestrator | 2025-05-17 01:33:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:16.684864 | orchestrator | 2025-05-17 01:33:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:16.684969 | orchestrator | 2025-05-17 01:33:16 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:19.734227 | orchestrator | 2025-05-17 01:33:19 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:19.734338 | orchestrator | 2025-05-17 01:33:19 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:22.784000 | orchestrator | 2025-05-17 01:33:22 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:22.784084 | orchestrator | 2025-05-17 01:33:22 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:25.836833 | orchestrator | 2025-05-17 01:33:25 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:25.836965 | orchestrator | 2025-05-17 01:33:25 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:28.883746 | orchestrator | 2025-05-17 01:33:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:28.883856 | orchestrator | 2025-05-17 01:33:28 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:31.933282 | orchestrator | 2025-05-17 01:33:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:31.933393 | orchestrator | 2025-05-17 01:33:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:34.982315 | orchestrator | 2025-05-17 01:33:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:34.982424 | orchestrator | 2025-05-17 01:33:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:38.027966 | orchestrator | 2025-05-17 01:33:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:38.028084 | orchestrator | 2025-05-17 01:33:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:41.071774 | orchestrator | 2025-05-17 01:33:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:41.071881 | orchestrator | 2025-05-17 01:33:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:44.112019 | orchestrator | 2025-05-17 01:33:44 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:44.112135 | orchestrator | 2025-05-17 01:33:44 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:47.157953 | orchestrator | 2025-05-17 01:33:47 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:47.158131 | orchestrator | 2025-05-17 01:33:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:50.206004 | orchestrator | 2025-05-17 01:33:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:50.206243 | orchestrator | 2025-05-17 01:33:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:53.259607 | orchestrator | 2025-05-17 01:33:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:53.259779 | orchestrator | 2025-05-17 01:33:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:56.309659 | orchestrator | 2025-05-17 01:33:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:56.309834 | orchestrator | 2025-05-17 01:33:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:33:59.355424 | orchestrator | 2025-05-17 01:33:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:33:59.355532 | orchestrator | 2025-05-17 01:33:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:02.401413 | orchestrator | 2025-05-17 01:34:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:02.401554 | orchestrator | 2025-05-17 01:34:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:05.446428 | orchestrator | 2025-05-17 01:34:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:05.446540 | orchestrator | 2025-05-17 01:34:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:08.495516 | orchestrator | 2025-05-17 01:34:08 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:08.495632 | orchestrator | 2025-05-17 01:34:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:11.543162 | orchestrator | 2025-05-17 01:34:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:11.543290 | orchestrator | 2025-05-17 01:34:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:14.589710 | orchestrator | 2025-05-17 01:34:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:14.589827 | orchestrator | 2025-05-17 01:34:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:17.640391 | orchestrator | 2025-05-17 01:34:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:17.640491 | orchestrator | 2025-05-17 01:34:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:20.698202 | orchestrator | 2025-05-17 01:34:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:20.698332 | orchestrator | 2025-05-17 01:34:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:23.741418 | orchestrator | 2025-05-17 01:34:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:23.741526 | orchestrator | 2025-05-17 01:34:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:26.790320 | orchestrator | 2025-05-17 01:34:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:26.790439 | orchestrator | 2025-05-17 01:34:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:29.833119 | orchestrator | 2025-05-17 01:34:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:29.833266 | orchestrator | 2025-05-17 01:34:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:34:32.885204 | orchestrator | 2025-05-17 01:34:32 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:34:32.885329 | orchestrator | 2025-05-17 01:34:32 | INFO  | Wait 1 second(s) until the next check
2025-05-17 01:34:54.233398 | orchestrator | 2025-05-17 01:34:54 | INFO  | Task cff7ce3b-acf3-4935-8819-670d802cd487 is in state STARTED
2025-05-17 01:35:03.408113 | orchestrator | 2025-05-17 01:35:03 | INFO  | Task cff7ce3b-acf3-4935-8819-670d802cd487 is in state SUCCESS
[... identical polling messages elided: Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 reported in state STARTED every ~3 seconds from 01:34:35 through 01:42:34, each followed by "Wait 1 second(s) until the next check" ...]
2025-05-17 01:42:34.563478 | orchestrator | 2025-05-17 01:42:34 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:42:34.563587 | orchestrator | 2025-05-17 01:42:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:42:37.614353 | orchestrator | 2025-05-17 01:42:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:42:37.614476 | orchestrator | 2025-05-17 01:42:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:42:40.660261 | orchestrator | 2025-05-17 01:42:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:42:40.660368 | orchestrator | 2025-05-17 01:42:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:42:43.699234 | orchestrator | 2025-05-17 01:42:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:42:43.699347 | orchestrator | 2025-05-17 01:42:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:42:46.745289 | orchestrator | 2025-05-17 01:42:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:42:46.745389 | orchestrator | 2025-05-17 01:42:46 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:42:49.795755 | orchestrator | 2025-05-17 01:42:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:42:49.795882 | orchestrator | 2025-05-17 01:42:49 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:42:52.843880 | orchestrator | 2025-05-17 01:42:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:42:52.843985 | orchestrator | 2025-05-17 01:42:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:42:55.891385 | orchestrator | 2025-05-17 01:42:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:42:55.891496 | orchestrator | 2025-05-17 01:42:55 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:42:58.938822 | orchestrator | 2025-05-17 01:42:58 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:42:58.938935 | orchestrator | 2025-05-17 01:42:58 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:01.985132 | orchestrator | 2025-05-17 01:43:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:01.985221 | orchestrator | 2025-05-17 01:43:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:05.043644 | orchestrator | 2025-05-17 01:43:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:05.043754 | orchestrator | 2025-05-17 01:43:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:08.094307 | orchestrator | 2025-05-17 01:43:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:08.094411 | orchestrator | 2025-05-17 01:43:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:11.143695 | orchestrator | 2025-05-17 01:43:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:11.143840 | orchestrator | 2025-05-17 01:43:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:14.188812 | orchestrator | 2025-05-17 01:43:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:14.188922 | orchestrator | 2025-05-17 01:43:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:17.239491 | orchestrator | 2025-05-17 01:43:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:17.239672 | orchestrator | 2025-05-17 01:43:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:20.285521 | orchestrator | 2025-05-17 01:43:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:20.285673 | orchestrator | 2025-05-17 01:43:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:23.326437 | orchestrator | 2025-05-17 01:43:23 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:23.326601 | orchestrator | 2025-05-17 01:43:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:26.370097 | orchestrator | 2025-05-17 01:43:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:26.370192 | orchestrator | 2025-05-17 01:43:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:29.420845 | orchestrator | 2025-05-17 01:43:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:29.420994 | orchestrator | 2025-05-17 01:43:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:32.458458 | orchestrator | 2025-05-17 01:43:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:32.458617 | orchestrator | 2025-05-17 01:43:32 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:35.509100 | orchestrator | 2025-05-17 01:43:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:35.509211 | orchestrator | 2025-05-17 01:43:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:38.559273 | orchestrator | 2025-05-17 01:43:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:38.559383 | orchestrator | 2025-05-17 01:43:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:41.607488 | orchestrator | 2025-05-17 01:43:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:41.607633 | orchestrator | 2025-05-17 01:43:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:44.660130 | orchestrator | 2025-05-17 01:43:44 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:44.660334 | orchestrator | 2025-05-17 01:43:44 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:47.718583 | orchestrator | 2025-05-17 01:43:47 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:47.718701 | orchestrator | 2025-05-17 01:43:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:50.754340 | orchestrator | 2025-05-17 01:43:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:50.754452 | orchestrator | 2025-05-17 01:43:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:53.799880 | orchestrator | 2025-05-17 01:43:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:53.799990 | orchestrator | 2025-05-17 01:43:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:56.848087 | orchestrator | 2025-05-17 01:43:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:56.848190 | orchestrator | 2025-05-17 01:43:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:43:59.889621 | orchestrator | 2025-05-17 01:43:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:43:59.889734 | orchestrator | 2025-05-17 01:43:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:02.938385 | orchestrator | 2025-05-17 01:44:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:02.938552 | orchestrator | 2025-05-17 01:44:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:05.995427 | orchestrator | 2025-05-17 01:44:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:05.995579 | orchestrator | 2025-05-17 01:44:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:09.037007 | orchestrator | 2025-05-17 01:44:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:09.037110 | orchestrator | 2025-05-17 01:44:09 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:12.090272 | orchestrator | 2025-05-17 01:44:12 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:12.090402 | orchestrator | 2025-05-17 01:44:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:15.134424 | orchestrator | 2025-05-17 01:44:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:15.134596 | orchestrator | 2025-05-17 01:44:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:18.181902 | orchestrator | 2025-05-17 01:44:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:18.182096 | orchestrator | 2025-05-17 01:44:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:21.231140 | orchestrator | 2025-05-17 01:44:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:21.231229 | orchestrator | 2025-05-17 01:44:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:24.274905 | orchestrator | 2025-05-17 01:44:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:24.275027 | orchestrator | 2025-05-17 01:44:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:27.327516 | orchestrator | 2025-05-17 01:44:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:27.327624 | orchestrator | 2025-05-17 01:44:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:30.366254 | orchestrator | 2025-05-17 01:44:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:30.366371 | orchestrator | 2025-05-17 01:44:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:33.415922 | orchestrator | 2025-05-17 01:44:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:33.416031 | orchestrator | 2025-05-17 01:44:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:36.456008 | orchestrator | 2025-05-17 01:44:36 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:36.456116 | orchestrator | 2025-05-17 01:44:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:39.499663 | orchestrator | 2025-05-17 01:44:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:39.499775 | orchestrator | 2025-05-17 01:44:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:42.557715 | orchestrator | 2025-05-17 01:44:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:42.557828 | orchestrator | 2025-05-17 01:44:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:45.603973 | orchestrator | 2025-05-17 01:44:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:45.604055 | orchestrator | 2025-05-17 01:44:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:48.653796 | orchestrator | 2025-05-17 01:44:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:48.653910 | orchestrator | 2025-05-17 01:44:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:51.698469 | orchestrator | 2025-05-17 01:44:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:51.698576 | orchestrator | 2025-05-17 01:44:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:54.754830 | orchestrator | 2025-05-17 01:44:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:54.755839 | orchestrator | 2025-05-17 01:44:54 | INFO  | Task 0d7cadc5-af76-455c-806b-4ee28f4bf275 is in state STARTED 2025-05-17 01:44:54.755980 | orchestrator | 2025-05-17 01:44:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:44:57.813157 | orchestrator | 2025-05-17 01:44:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:44:57.814280 | orchestrator | 2025-05-17 01:44:57 | INFO 
 | Task 0d7cadc5-af76-455c-806b-4ee28f4bf275 is in state STARTED 2025-05-17 01:44:57.814587 | orchestrator | 2025-05-17 01:44:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:00.870188 | orchestrator | 2025-05-17 01:45:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:00.872284 | orchestrator | 2025-05-17 01:45:00 | INFO  | Task 0d7cadc5-af76-455c-806b-4ee28f4bf275 is in state STARTED 2025-05-17 01:45:00.872334 | orchestrator | 2025-05-17 01:45:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:03.923116 | orchestrator | 2025-05-17 01:45:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:03.924349 | orchestrator | 2025-05-17 01:45:03 | INFO  | Task 0d7cadc5-af76-455c-806b-4ee28f4bf275 is in state SUCCESS 2025-05-17 01:45:03.924515 | orchestrator | 2025-05-17 01:45:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:06.974552 | orchestrator | 2025-05-17 01:45:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:06.974682 | orchestrator | 2025-05-17 01:45:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:10.024126 | orchestrator | 2025-05-17 01:45:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:10.024264 | orchestrator | 2025-05-17 01:45:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:13.067721 | orchestrator | 2025-05-17 01:45:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:13.067847 | orchestrator | 2025-05-17 01:45:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:16.124072 | orchestrator | 2025-05-17 01:45:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:16.124178 | orchestrator | 2025-05-17 01:45:16 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:19.172098 | orchestrator | 2025-05-17 
01:45:19 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:19.172242 | orchestrator | 2025-05-17 01:45:19 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:22.222904 | orchestrator | 2025-05-17 01:45:22 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:22.223012 | orchestrator | 2025-05-17 01:45:22 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:25.269118 | orchestrator | 2025-05-17 01:45:25 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:25.269235 | orchestrator | 2025-05-17 01:45:25 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:28.309284 | orchestrator | 2025-05-17 01:45:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:28.309417 | orchestrator | 2025-05-17 01:45:28 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:31.362772 | orchestrator | 2025-05-17 01:45:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:31.362895 | orchestrator | 2025-05-17 01:45:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:34.414285 | orchestrator | 2025-05-17 01:45:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:34.414420 | orchestrator | 2025-05-17 01:45:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:37.462739 | orchestrator | 2025-05-17 01:45:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:37.462866 | orchestrator | 2025-05-17 01:45:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:40.507452 | orchestrator | 2025-05-17 01:45:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:40.507559 | orchestrator | 2025-05-17 01:45:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:43.550406 | orchestrator | 2025-05-17 01:45:43 | INFO 
 | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:43.550519 | orchestrator | 2025-05-17 01:45:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:46.594503 | orchestrator | 2025-05-17 01:45:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:46.595080 | orchestrator | 2025-05-17 01:45:46 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:49.635080 | orchestrator | 2025-05-17 01:45:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:49.635216 | orchestrator | 2025-05-17 01:45:49 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:52.694985 | orchestrator | 2025-05-17 01:45:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:52.695112 | orchestrator | 2025-05-17 01:45:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:55.736965 | orchestrator | 2025-05-17 01:45:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:55.737076 | orchestrator | 2025-05-17 01:45:55 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:45:58.783559 | orchestrator | 2025-05-17 01:45:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:45:58.783638 | orchestrator | 2025-05-17 01:45:58 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:01.832866 | orchestrator | 2025-05-17 01:46:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:01.832977 | orchestrator | 2025-05-17 01:46:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:04.877766 | orchestrator | 2025-05-17 01:46:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:04.877890 | orchestrator | 2025-05-17 01:46:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:07.931118 | orchestrator | 2025-05-17 01:46:07 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:07.931228 | orchestrator | 2025-05-17 01:46:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:10.985126 | orchestrator | 2025-05-17 01:46:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:10.985455 | orchestrator | 2025-05-17 01:46:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:14.032028 | orchestrator | 2025-05-17 01:46:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:14.032142 | orchestrator | 2025-05-17 01:46:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:17.074731 | orchestrator | 2025-05-17 01:46:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:17.074834 | orchestrator | 2025-05-17 01:46:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:20.123250 | orchestrator | 2025-05-17 01:46:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:20.123403 | orchestrator | 2025-05-17 01:46:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:23.170171 | orchestrator | 2025-05-17 01:46:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:23.170266 | orchestrator | 2025-05-17 01:46:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:26.214807 | orchestrator | 2025-05-17 01:46:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:26.214917 | orchestrator | 2025-05-17 01:46:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:29.257579 | orchestrator | 2025-05-17 01:46:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:29.257683 | orchestrator | 2025-05-17 01:46:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:32.302223 | orchestrator | 2025-05-17 01:46:32 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:32.302404 | orchestrator | 2025-05-17 01:46:32 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:35.354168 | orchestrator | 2025-05-17 01:46:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:35.354294 | orchestrator | 2025-05-17 01:46:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:38.404462 | orchestrator | 2025-05-17 01:46:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:38.404606 | orchestrator | 2025-05-17 01:46:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:41.458526 | orchestrator | 2025-05-17 01:46:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:41.458659 | orchestrator | 2025-05-17 01:46:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:44.505561 | orchestrator | 2025-05-17 01:46:44 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:44.505686 | orchestrator | 2025-05-17 01:46:44 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:47.554142 | orchestrator | 2025-05-17 01:46:47 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:47.554271 | orchestrator | 2025-05-17 01:46:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:50.604205 | orchestrator | 2025-05-17 01:46:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:50.604403 | orchestrator | 2025-05-17 01:46:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:53.652796 | orchestrator | 2025-05-17 01:46:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:53.652907 | orchestrator | 2025-05-17 01:46:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:56.700624 | orchestrator | 2025-05-17 01:46:56 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:56.700738 | orchestrator | 2025-05-17 01:46:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:46:59.752952 | orchestrator | 2025-05-17 01:46:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:46:59.753061 | orchestrator | 2025-05-17 01:46:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:02.806164 | orchestrator | 2025-05-17 01:47:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:02.806270 | orchestrator | 2025-05-17 01:47:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:05.851159 | orchestrator | 2025-05-17 01:47:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:05.851263 | orchestrator | 2025-05-17 01:47:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:08.902594 | orchestrator | 2025-05-17 01:47:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:08.902702 | orchestrator | 2025-05-17 01:47:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:11.951980 | orchestrator | 2025-05-17 01:47:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:11.952229 | orchestrator | 2025-05-17 01:47:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:15.000400 | orchestrator | 2025-05-17 01:47:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:15.000549 | orchestrator | 2025-05-17 01:47:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:18.041059 | orchestrator | 2025-05-17 01:47:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:18.041156 | orchestrator | 2025-05-17 01:47:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:21.085100 | orchestrator | 2025-05-17 01:47:21 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:21.085216 | orchestrator | 2025-05-17 01:47:21 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:24.140093 | orchestrator | 2025-05-17 01:47:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:24.140202 | orchestrator | 2025-05-17 01:47:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:27.193044 | orchestrator | 2025-05-17 01:47:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:27.193137 | orchestrator | 2025-05-17 01:47:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:30.249313 | orchestrator | 2025-05-17 01:47:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:30.249420 | orchestrator | 2025-05-17 01:47:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:33.293736 | orchestrator | 2025-05-17 01:47:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:33.293847 | orchestrator | 2025-05-17 01:47:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:36.339874 | orchestrator | 2025-05-17 01:47:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:36.339984 | orchestrator | 2025-05-17 01:47:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:39.388782 | orchestrator | 2025-05-17 01:47:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:39.388891 | orchestrator | 2025-05-17 01:47:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:42.431406 | orchestrator | 2025-05-17 01:47:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:42.431575 | orchestrator | 2025-05-17 01:47:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:47:45.480742 | orchestrator | 2025-05-17 01:47:45 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:47:45.480860 | orchestrator | 2025-05-17 01:47:45 | INFO  | Wait 1 second(s) until the next check
[... identical "Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED" / "Wait 1 second(s) until the next check" line pairs repeated every ~3 s from 01:47:48 through 01:54:49 ...]
2025-05-17 01:54:52.279769 | orchestrator | 2025-05-17 01:54:52 | INFO  | Task fdb5b176-560f-4ccc-a51d-e5311cf72fbb is in state STARTED 2025-05-17 01:54:52.281150 | orchestrator | 2025-05-17 01:54:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:54:52.281207 | orchestrator | 2025-05-17 01:54:52 | INFO  | Wait 1 second(s) until the next check
[... both tasks polled in state STARTED every ~3 s through 01:55:01 ...]
2025-05-17 01:55:04.498397 | orchestrator | 2025-05-17 01:55:04 | INFO  | Task fdb5b176-560f-4ccc-a51d-e5311cf72fbb is in state SUCCESS 2025-05-17 01:55:04.498512 | orchestrator | 2025-05-17 01:55:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:55:04.498528 | orchestrator | 2025-05-17 01:55:04 | INFO  | Wait 1 second(s) until the next check
[... "Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED" / "Wait 1 second(s) until the next check" line pairs repeated every ~3 s from 01:55:07 through 01:55:41 ...]
2025-05-17 01:55:44.126804 | orchestrator | 2025-05-17 01:55:44 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:55:44.126905 | orchestrator | 
2025-05-17 01:55:44 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:55:47.177259 | orchestrator | 2025-05-17 01:55:47 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:55:47.177451 | orchestrator | 2025-05-17 01:55:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:55:50.228282 | orchestrator | 2025-05-17 01:55:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:55:50.228415 | orchestrator | 2025-05-17 01:55:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:55:53.273755 | orchestrator | 2025-05-17 01:55:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:55:53.273840 | orchestrator | 2025-05-17 01:55:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:55:56.322265 | orchestrator | 2025-05-17 01:55:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:55:56.322415 | orchestrator | 2025-05-17 01:55:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:55:59.368744 | orchestrator | 2025-05-17 01:55:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:55:59.368863 | orchestrator | 2025-05-17 01:55:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:02.416010 | orchestrator | 2025-05-17 01:56:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:02.416124 | orchestrator | 2025-05-17 01:56:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:05.461621 | orchestrator | 2025-05-17 01:56:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:05.461739 | orchestrator | 2025-05-17 01:56:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:08.503347 | orchestrator | 2025-05-17 01:56:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:08.503584 | orchestrator | 2025-05-17 
01:56:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:11.552318 | orchestrator | 2025-05-17 01:56:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:11.552470 | orchestrator | 2025-05-17 01:56:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:14.605307 | orchestrator | 2025-05-17 01:56:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:14.605461 | orchestrator | 2025-05-17 01:56:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:17.663773 | orchestrator | 2025-05-17 01:56:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:17.663888 | orchestrator | 2025-05-17 01:56:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:20.712062 | orchestrator | 2025-05-17 01:56:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:20.712169 | orchestrator | 2025-05-17 01:56:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:23.756804 | orchestrator | 2025-05-17 01:56:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:23.756919 | orchestrator | 2025-05-17 01:56:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:26.803636 | orchestrator | 2025-05-17 01:56:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:26.803743 | orchestrator | 2025-05-17 01:56:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:29.845840 | orchestrator | 2025-05-17 01:56:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:29.845935 | orchestrator | 2025-05-17 01:56:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:32.894621 | orchestrator | 2025-05-17 01:56:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:32.894734 | orchestrator | 2025-05-17 01:56:32 | INFO 
 | Wait 1 second(s) until the next check 2025-05-17 01:56:35.944783 | orchestrator | 2025-05-17 01:56:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:35.944898 | orchestrator | 2025-05-17 01:56:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:38.988178 | orchestrator | 2025-05-17 01:56:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:38.988320 | orchestrator | 2025-05-17 01:56:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:42.030336 | orchestrator | 2025-05-17 01:56:42 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:42.030481 | orchestrator | 2025-05-17 01:56:42 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:45.081727 | orchestrator | 2025-05-17 01:56:45 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:45.081799 | orchestrator | 2025-05-17 01:56:45 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:48.135408 | orchestrator | 2025-05-17 01:56:48 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:48.135593 | orchestrator | 2025-05-17 01:56:48 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:51.190849 | orchestrator | 2025-05-17 01:56:51 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:51.190940 | orchestrator | 2025-05-17 01:56:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:54.243245 | orchestrator | 2025-05-17 01:56:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:54.243340 | orchestrator | 2025-05-17 01:56:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:56:57.289481 | orchestrator | 2025-05-17 01:56:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:56:57.289613 | orchestrator | 2025-05-17 01:56:57 | INFO  | Wait 1 
second(s) until the next check 2025-05-17 01:57:00.345356 | orchestrator | 2025-05-17 01:57:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:00.345540 | orchestrator | 2025-05-17 01:57:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:03.395601 | orchestrator | 2025-05-17 01:57:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:03.395708 | orchestrator | 2025-05-17 01:57:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:06.441932 | orchestrator | 2025-05-17 01:57:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:06.442142 | orchestrator | 2025-05-17 01:57:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:09.491100 | orchestrator | 2025-05-17 01:57:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:09.491210 | orchestrator | 2025-05-17 01:57:09 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:12.537414 | orchestrator | 2025-05-17 01:57:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:12.537579 | orchestrator | 2025-05-17 01:57:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:15.592027 | orchestrator | 2025-05-17 01:57:15 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:15.592130 | orchestrator | 2025-05-17 01:57:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:18.639606 | orchestrator | 2025-05-17 01:57:18 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:18.639742 | orchestrator | 2025-05-17 01:57:18 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:21.693653 | orchestrator | 2025-05-17 01:57:21 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:21.693758 | orchestrator | 2025-05-17 01:57:21 | INFO  | Wait 1 second(s) until 
the next check 2025-05-17 01:57:24.750269 | orchestrator | 2025-05-17 01:57:24 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:24.750390 | orchestrator | 2025-05-17 01:57:24 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:27.798341 | orchestrator | 2025-05-17 01:57:27 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:27.798448 | orchestrator | 2025-05-17 01:57:27 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:30.842349 | orchestrator | 2025-05-17 01:57:30 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:30.842558 | orchestrator | 2025-05-17 01:57:30 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:33.889403 | orchestrator | 2025-05-17 01:57:33 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:33.889563 | orchestrator | 2025-05-17 01:57:33 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:36.945779 | orchestrator | 2025-05-17 01:57:36 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:36.945919 | orchestrator | 2025-05-17 01:57:36 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:39.991001 | orchestrator | 2025-05-17 01:57:39 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:39.991106 | orchestrator | 2025-05-17 01:57:39 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:43.035069 | orchestrator | 2025-05-17 01:57:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:43.035180 | orchestrator | 2025-05-17 01:57:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:46.082205 | orchestrator | 2025-05-17 01:57:46 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:46.082304 | orchestrator | 2025-05-17 01:57:46 | INFO  | Wait 1 second(s) until the next check 
2025-05-17 01:57:49.126730 | orchestrator | 2025-05-17 01:57:49 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:49.126868 | orchestrator | 2025-05-17 01:57:49 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:52.170792 | orchestrator | 2025-05-17 01:57:52 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:52.170900 | orchestrator | 2025-05-17 01:57:52 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:55.225125 | orchestrator | 2025-05-17 01:57:55 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:55.225260 | orchestrator | 2025-05-17 01:57:55 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:57:58.269750 | orchestrator | 2025-05-17 01:57:58 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:57:58.269910 | orchestrator | 2025-05-17 01:57:58 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:01.314443 | orchestrator | 2025-05-17 01:58:01 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:01.314597 | orchestrator | 2025-05-17 01:58:01 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:04.363036 | orchestrator | 2025-05-17 01:58:04 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:04.363158 | orchestrator | 2025-05-17 01:58:04 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:07.410912 | orchestrator | 2025-05-17 01:58:07 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:07.411029 | orchestrator | 2025-05-17 01:58:07 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:10.462930 | orchestrator | 2025-05-17 01:58:10 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:10.463040 | orchestrator | 2025-05-17 01:58:10 | INFO  | Wait 1 second(s) until the next check 2025-05-17 
01:58:13.508485 | orchestrator | 2025-05-17 01:58:13 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:13.508651 | orchestrator | 2025-05-17 01:58:13 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:16.559359 | orchestrator | 2025-05-17 01:58:16 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:16.559463 | orchestrator | 2025-05-17 01:58:16 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:19.604934 | orchestrator | 2025-05-17 01:58:19 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:19.605528 | orchestrator | 2025-05-17 01:58:19 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:22.650978 | orchestrator | 2025-05-17 01:58:22 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:22.651085 | orchestrator | 2025-05-17 01:58:22 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:25.701933 | orchestrator | 2025-05-17 01:58:25 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:25.702105 | orchestrator | 2025-05-17 01:58:25 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:28.743147 | orchestrator | 2025-05-17 01:58:28 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:28.743258 | orchestrator | 2025-05-17 01:58:28 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:31.792742 | orchestrator | 2025-05-17 01:58:31 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:31.792859 | orchestrator | 2025-05-17 01:58:31 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:34.847252 | orchestrator | 2025-05-17 01:58:34 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:34.847325 | orchestrator | 2025-05-17 01:58:34 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:37.891737 
| orchestrator | 2025-05-17 01:58:37 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:37.891875 | orchestrator | 2025-05-17 01:58:37 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:40.939453 | orchestrator | 2025-05-17 01:58:40 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:40.939622 | orchestrator | 2025-05-17 01:58:40 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:43.987434 | orchestrator | 2025-05-17 01:58:43 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:43.987619 | orchestrator | 2025-05-17 01:58:43 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:47.037438 | orchestrator | 2025-05-17 01:58:47 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:47.037596 | orchestrator | 2025-05-17 01:58:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:50.087378 | orchestrator | 2025-05-17 01:58:50 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:50.087496 | orchestrator | 2025-05-17 01:58:50 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:53.141470 | orchestrator | 2025-05-17 01:58:53 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:53.141618 | orchestrator | 2025-05-17 01:58:53 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:56.188077 | orchestrator | 2025-05-17 01:58:56 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:56.188216 | orchestrator | 2025-05-17 01:58:56 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:58:59.228562 | orchestrator | 2025-05-17 01:58:59 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:58:59.228673 | orchestrator | 2025-05-17 01:58:59 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:02.277628 | orchestrator 
| 2025-05-17 01:59:02 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:02.277762 | orchestrator | 2025-05-17 01:59:02 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:05.320376 | orchestrator | 2025-05-17 01:59:05 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:05.320473 | orchestrator | 2025-05-17 01:59:05 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:08.370785 | orchestrator | 2025-05-17 01:59:08 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:08.370902 | orchestrator | 2025-05-17 01:59:08 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:11.422125 | orchestrator | 2025-05-17 01:59:11 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:11.422263 | orchestrator | 2025-05-17 01:59:11 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:14.467403 | orchestrator | 2025-05-17 01:59:14 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:14.467639 | orchestrator | 2025-05-17 01:59:14 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:17.513829 | orchestrator | 2025-05-17 01:59:17 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:17.513939 | orchestrator | 2025-05-17 01:59:17 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:20.563334 | orchestrator | 2025-05-17 01:59:20 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:20.563502 | orchestrator | 2025-05-17 01:59:20 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:23.609594 | orchestrator | 2025-05-17 01:59:23 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:23.609701 | orchestrator | 2025-05-17 01:59:23 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:26.658128 | orchestrator | 2025-05-17 
01:59:26 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:26.658266 | orchestrator | 2025-05-17 01:59:26 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:29.704386 | orchestrator | 2025-05-17 01:59:29 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:29.704551 | orchestrator | 2025-05-17 01:59:29 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:32.755346 | orchestrator | 2025-05-17 01:59:32 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:32.755477 | orchestrator | 2025-05-17 01:59:32 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:35.805136 | orchestrator | 2025-05-17 01:59:35 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:35.805238 | orchestrator | 2025-05-17 01:59:35 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:38.849831 | orchestrator | 2025-05-17 01:59:38 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:38.849946 | orchestrator | 2025-05-17 01:59:38 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:41.896073 | orchestrator | 2025-05-17 01:59:41 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:41.896210 | orchestrator | 2025-05-17 01:59:41 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:44.945225 | orchestrator | 2025-05-17 01:59:44 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:44.945334 | orchestrator | 2025-05-17 01:59:44 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:48.001147 | orchestrator | 2025-05-17 01:59:47 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:48.001239 | orchestrator | 2025-05-17 01:59:47 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:51.045354 | orchestrator | 2025-05-17 01:59:51 | INFO 
 | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:51.045476 | orchestrator | 2025-05-17 01:59:51 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:54.089335 | orchestrator | 2025-05-17 01:59:54 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:54.089483 | orchestrator | 2025-05-17 01:59:54 | INFO  | Wait 1 second(s) until the next check 2025-05-17 01:59:57.136586 | orchestrator | 2025-05-17 01:59:57 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 01:59:57.136697 | orchestrator | 2025-05-17 01:59:57 | INFO  | Wait 1 second(s) until the next check 2025-05-17 02:00:00.183920 | orchestrator | 2025-05-17 02:00:00 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 02:00:00.184076 | orchestrator | 2025-05-17 02:00:00 | INFO  | Wait 1 second(s) until the next check 2025-05-17 02:00:03.221522 | orchestrator | 2025-05-17 02:00:03 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 02:00:03.221634 | orchestrator | 2025-05-17 02:00:03 | INFO  | Wait 1 second(s) until the next check 2025-05-17 02:00:06.269391 | orchestrator | 2025-05-17 02:00:06 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 02:00:06.269514 | orchestrator | 2025-05-17 02:00:06 | INFO  | Wait 1 second(s) until the next check 2025-05-17 02:00:09.320937 | orchestrator | 2025-05-17 02:00:09 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 02:00:09.321049 | orchestrator | 2025-05-17 02:00:09 | INFO  | Wait 1 second(s) until the next check 2025-05-17 02:00:12.369012 | orchestrator | 2025-05-17 02:00:12 | INFO  | Task dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 02:00:12.369117 | orchestrator | 2025-05-17 02:00:12 | INFO  | Wait 1 second(s) until the next check 2025-05-17 02:00:15.417323 | orchestrator | 2025-05-17 02:00:15 | INFO  | Task 
dd82ba6a-9aa0-4607-8198-d19da9595ea1 is in state STARTED 2025-05-17 02:00:15.417414 | orchestrator | 2025-05-17 02:00:15 | INFO  | Wait 1 second(s) until the next check 2025-05-17 02:00:17.722646 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-17 02:00:17.725347 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-17 02:00:18.488261 | 2025-05-17 02:00:18.488439 | PLAY [Post output play] 2025-05-17 02:00:18.505512 | 2025-05-17 02:00:18.505696 | LOOP [stage-output : Register sources] 2025-05-17 02:00:18.585908 | 2025-05-17 02:00:18.586193 | TASK [stage-output : Check sudo] 2025-05-17 02:00:19.496079 | orchestrator | sudo: a password is required 2025-05-17 02:00:19.627575 | orchestrator | ok: Runtime: 0:00:00.015620 2025-05-17 02:00:19.641164 | 2025-05-17 02:00:19.641311 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-17 02:00:19.684581 | 2025-05-17 02:00:19.684904 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-05-17 02:00:19.757024 | orchestrator | ok 2025-05-17 02:00:19.765788 | 2025-05-17 02:00:19.765968 | LOOP [stage-output : Ensure target folders exist] 2025-05-17 02:00:20.241815 | orchestrator | ok: "docs" 2025-05-17 02:00:20.242342 | 2025-05-17 02:00:20.493619 | orchestrator | ok: "artifacts" 2025-05-17 02:00:20.741375 | orchestrator | ok: "logs" 2025-05-17 02:00:20.765078 | 2025-05-17 02:00:20.765269 | LOOP [stage-output : Copy files and folders to staging folder] 2025-05-17 02:00:20.807645 | 2025-05-17 02:00:20.808001 | TASK [stage-output : Make all log files readable] 2025-05-17 02:00:21.113096 | orchestrator | ok 2025-05-17 02:00:21.122383 | 2025-05-17 02:00:21.122536 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-05-17 02:00:21.157446 | orchestrator | skipping: Conditional result was False 2025-05-17 02:00:21.167684 | 2025-05-17 02:00:21.167855 | TASK [stage-output : Discover log files 
for compression]
2025-05-17 02:00:21.193601 | orchestrator | skipping: Conditional result was False
2025-05-17 02:00:21.210903 | LOOP [stage-output : Archive everything from logs]
2025-05-17 02:00:21.281396 | PLAY [Post cleanup play]
2025-05-17 02:00:21.296635 | TASK [Set cloud fact (Zuul deployment)]
2025-05-17 02:00:21.359860 | orchestrator | ok
2025-05-17 02:00:21.372213 | TASK [Set cloud fact (local deployment)]
2025-05-17 02:00:21.408650 | orchestrator | skipping: Conditional result was False
2025-05-17 02:00:21.420345 | TASK [Clean the cloud environment]
2025-05-17 02:00:22.056659 | orchestrator | 2025-05-17 02:00:22 - clean up servers
2025-05-17 02:00:22.802820 | orchestrator | 2025-05-17 02:00:22 - testbed-manager
2025-05-17 02:00:22.908467 | orchestrator | 2025-05-17 02:00:22 - testbed-node-5
2025-05-17 02:00:23.011190 | orchestrator | 2025-05-17 02:00:23 - testbed-node-3
2025-05-17 02:00:23.122252 | orchestrator | 2025-05-17 02:00:23 - testbed-node-4
2025-05-17 02:00:23.226211 | orchestrator | 2025-05-17 02:00:23 - testbed-node-2
2025-05-17 02:00:23.333122 | orchestrator | 2025-05-17 02:00:23 - testbed-node-1
2025-05-17 02:00:23.446108 | orchestrator | 2025-05-17 02:00:23 - testbed-node-0
2025-05-17 02:00:23.538335 | orchestrator | 2025-05-17 02:00:23 - clean up keypairs
2025-05-17 02:00:23.555996 | orchestrator | 2025-05-17 02:00:23 - testbed
2025-05-17 02:00:23.584291 | orchestrator | 2025-05-17 02:00:23 - wait for servers to be gone
2025-05-17 02:00:34.529424 | orchestrator | 2025-05-17 02:00:34 - clean up ports
2025-05-17 02:00:34.719289 | orchestrator | 2025-05-17 02:00:34 - 17f0277b-5b65-47a4-887b-18a8b0475fe9
2025-05-17 02:00:35.001549 | orchestrator | 2025-05-17 02:00:35 - 1a5c0a29-1392-40f4-9cec-dc068a4158e3
2025-05-17 02:00:35.255025 | orchestrator | 2025-05-17 02:00:35 - 2d07c8ea-6993-489e-a48a-8730f9fd5482
2025-05-17 02:00:35.671773 | orchestrator | 2025-05-17 02:00:35 - 66db5433-ed98-4b17-af95-09b1b83a8651
2025-05-17 02:00:35.855701 | orchestrator | 2025-05-17 02:00:35 - 95a19887-5c1b-40e3-9c78-1643068e3936
2025-05-17 02:00:36.080312 | orchestrator | 2025-05-17 02:00:36 - be52ba64-7fde-443b-85e8-801918a53449
2025-05-17 02:00:36.349402 | orchestrator | 2025-05-17 02:00:36 - dd34f0c4-4e38-4973-a767-764ba0e4d6f2
2025-05-17 02:00:36.529821 | orchestrator | 2025-05-17 02:00:36 - clean up volumes
2025-05-17 02:00:36.645009 | orchestrator | 2025-05-17 02:00:36 - testbed-volume-manager-base
2025-05-17 02:00:36.683214 | orchestrator | 2025-05-17 02:00:36 - testbed-volume-2-node-base
2025-05-17 02:00:36.723488 | orchestrator | 2025-05-17 02:00:36 - testbed-volume-3-node-base
2025-05-17 02:00:36.768125 | orchestrator | 2025-05-17 02:00:36 - testbed-volume-1-node-base
2025-05-17 02:00:36.811080 | orchestrator | 2025-05-17 02:00:36 - testbed-volume-0-node-base
2025-05-17 02:00:36.854108 | orchestrator | 2025-05-17 02:00:36 - testbed-volume-5-node-base
2025-05-17 02:00:36.902953 | orchestrator | 2025-05-17 02:00:36 - testbed-volume-4-node-base
2025-05-17 02:00:36.944304 | orchestrator | 2025-05-17 02:00:36 - testbed-volume-4-node-4
2025-05-17 02:00:36.986955 | orchestrator | 2025-05-17 02:00:36 - testbed-volume-2-node-5
2025-05-17 02:00:37.030713 | orchestrator | 2025-05-17 02:00:37 - testbed-volume-7-node-4
2025-05-17 02:00:37.079151 | orchestrator | 2025-05-17 02:00:37 - testbed-volume-8-node-5
2025-05-17 02:00:37.124538 | orchestrator | 2025-05-17 02:00:37 - testbed-volume-1-node-4
2025-05-17 02:00:37.169745 | orchestrator | 2025-05-17 02:00:37 - testbed-volume-6-node-3
2025-05-17 02:00:37.212482 | orchestrator | 2025-05-17 02:00:37 - testbed-volume-5-node-5
2025-05-17 02:00:37.254365 | orchestrator | 2025-05-17 02:00:37 - testbed-volume-0-node-3
2025-05-17 02:00:37.293906 | orchestrator | 2025-05-17 02:00:37 - testbed-volume-3-node-3
2025-05-17 02:00:37.333667 | orchestrator | 2025-05-17 02:00:37 - disconnect routers
2025-05-17 02:00:37.446157 | orchestrator | 2025-05-17 02:00:37 - testbed
2025-05-17 02:00:38.370940 | orchestrator | 2025-05-17 02:00:38 - clean up subnets
2025-05-17 02:00:38.408887 | orchestrator | 2025-05-17 02:00:38 - subnet-testbed-management
2025-05-17 02:00:38.578951 | orchestrator | 2025-05-17 02:00:38 - clean up networks
2025-05-17 02:00:38.755606 | orchestrator | 2025-05-17 02:00:38 - net-testbed-management
2025-05-17 02:00:39.033984 | orchestrator | 2025-05-17 02:00:39 - clean up security groups
2025-05-17 02:00:39.078833 | orchestrator | 2025-05-17 02:00:39 - testbed-management
2025-05-17 02:00:39.191682 | orchestrator | 2025-05-17 02:00:39 - testbed-node
2025-05-17 02:00:39.298435 | orchestrator | 2025-05-17 02:00:39 - clean up floating ips
2025-05-17 02:00:39.331427 | orchestrator | 2025-05-17 02:00:39 - 81.163.192.54
2025-05-17 02:00:39.678138 | orchestrator | 2025-05-17 02:00:39 - clean up routers
2025-05-17 02:00:39.785059 | orchestrator | 2025-05-17 02:00:39 - testbed
2025-05-17 02:00:41.013946 | orchestrator | ok: Runtime: 0:00:18.795538
2025-05-17 02:00:41.017400 | PLAY RECAP
2025-05-17 02:00:41.017493 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-17 02:00:41.159440 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-17 02:00:41.160476 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-17 02:00:41.892654 | PLAY [Cleanup play]
2025-05-17 02:00:41.912579 | TASK [Set cloud fact (Zuul deployment)]
2025-05-17 02:00:41.976244 | orchestrator | ok
2025-05-17 02:00:41.984753 | TASK [Set cloud fact (local deployment)]
2025-05-17 02:00:42.032223 | orchestrator | skipping: Conditional result was False
2025-05-17 02:00:42.048673 | TASK [Clean the cloud environment]
2025-05-17 02:00:43.219381 | orchestrator | 2025-05-17 02:00:43 - clean up servers
2025-05-17 02:00:43.678810 | orchestrator | 2025-05-17 02:00:43 - clean up keypairs
2025-05-17 02:00:43.692884 | orchestrator | 2025-05-17 02:00:43 - wait for servers to be gone
2025-05-17 02:00:43.732279 | orchestrator | 2025-05-17 02:00:43 - clean up ports
2025-05-17 02:00:43.804085 | orchestrator | 2025-05-17 02:00:43 - clean up volumes
2025-05-17 02:00:43.863099 | orchestrator | 2025-05-17 02:00:43 - disconnect routers
2025-05-17 02:00:43.891277 | orchestrator | 2025-05-17 02:00:43 - clean up subnets
2025-05-17 02:00:43.909317 | orchestrator | 2025-05-17 02:00:43 - clean up networks
2025-05-17 02:00:44.067057 | orchestrator | 2025-05-17 02:00:44 - clean up security groups
2025-05-17 02:00:44.102558 | orchestrator | 2025-05-17 02:00:44 - clean up floating ips
2025-05-17 02:00:44.124186 | orchestrator | 2025-05-17 02:00:44 - clean up routers
2025-05-17 02:00:44.590356 | orchestrator | ok: Runtime: 0:00:01.287267
2025-05-17 02:00:44.594756 | PLAY RECAP
2025-05-17 02:00:44.594958 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-17 02:00:44.737942 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-17 02:00:44.739692 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-17 02:00:45.552368 | PLAY [Base post-fetch]
2025-05-17 02:00:45.568871 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-17 02:00:45.665175 | orchestrator | skipping: Conditional result was False
2025-05-17 02:00:45.681480 | TASK [fetch-output : Set log path for single node]
2025-05-17 02:00:45.723341 | orchestrator | ok
2025-05-17 02:00:45.729529 | LOOP [fetch-output : Ensure local output dirs]
2025-05-17 02:00:46.248347 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/638702988b704f0fb3d99ff5b9aee4e6/work/logs"
2025-05-17 02:00:46.531275 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/638702988b704f0fb3d99ff5b9aee4e6/work/artifacts"
2025-05-17 02:00:46.811296 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/638702988b704f0fb3d99ff5b9aee4e6/work/docs"
2025-05-17 02:00:46.831917 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-17 02:00:47.851719 | orchestrator | changed: .d..t...... ./
2025-05-17 02:00:47.852002 | orchestrator | changed: All items complete
2025-05-17 02:00:48.681040 | orchestrator | changed: .d..t...... ./
2025-05-17 02:00:49.421337 | orchestrator | changed: .d..t...... ./
2025-05-17 02:00:49.450207 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-17 02:00:49.491878 | orchestrator | skipping: Conditional result was False
2025-05-17 02:00:49.495275 | orchestrator | skipping: Conditional result was False
2025-05-17 02:00:49.514740 | PLAY RECAP
2025-05-17 02:00:49.514818 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-17 02:00:49.658000 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-17 02:00:49.663368 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-17 02:00:50.455410 | PLAY [Base post]
2025-05-17 02:00:50.474603 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-17 02:00:51.492427 | orchestrator | changed
2025-05-17 02:00:51.503029 | PLAY RECAP
2025-05-17 02:00:51.503107 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-17 02:00:51.628786 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-17 02:00:51.631227 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-17 02:00:52.479036 | PLAY [Base post-logs]
2025-05-17 02:00:52.491305 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-17 02:00:53.011578 | localhost | changed
2025-05-17 02:00:53.029521 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-17 02:00:53.067710 | localhost | ok
2025-05-17 02:00:53.074203 | TASK [Set zuul-log-path fact]
2025-05-17 02:00:53.103437 | localhost | ok
2025-05-17 02:00:53.119411 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-17 02:00:53.165542 | localhost | ok
2025-05-17 02:00:53.172415 | TASK [upload-logs : Create log directories]
2025-05-17 02:00:53.698069 | localhost | changed
2025-05-17 02:00:53.701095 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-17 02:00:54.201555 | localhost -> localhost | ok: Runtime: 0:00:00.007211
2025-05-17 02:00:54.206171 | TASK [upload-logs : Upload logs to log server]
2025-05-17 02:00:54.793209 | localhost | Output suppressed because no_log was given
2025-05-17 02:00:54.795191 | LOOP [upload-logs : Compress console log and json output]
2025-05-17 02:00:54.851526 | localhost | skipping: Conditional result was False
2025-05-17 02:00:54.858105 | localhost | skipping: Conditional result was False
2025-05-17 02:00:54.870765 |
2025-05-17
02:00:54.871047 | LOOP [upload-logs : Upload compressed console log and json output] 2025-05-17 02:00:54.936074 | localhost | skipping: Conditional result was False 2025-05-17 02:00:54.936616 | 2025-05-17 02:00:54.939457 | localhost | skipping: Conditional result was False 2025-05-17 02:00:54.953964 | 2025-05-17 02:00:54.954202 | LOOP [upload-logs : Upload console log and json output]
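The `PLAY RECAP` lines above use Ansible's fixed `host | counter: N ...` summary format; a play is considered successful when `failed` and `unreachable` are both zero. A minimal sketch of parsing such a line (`parse_recap` is a hypothetical helper, not part of Zuul or Ansible):

```python
import re

def parse_recap(line: str) -> dict:
    """Parse an Ansible PLAY RECAP line, e.g.
    'orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0',
    into a dict of task-result counters plus the host name."""
    host, _, counters = line.partition(" | ")
    result = {key: int(value) for key, value in re.findall(r"(\w+): (\d+)", counters)}
    result["host"] = host.strip()
    return result

recap = parse_recap(
    "orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 "
    "skipped: 7 rescued: 0 ignored: 0"
)
ok = recap["failed"] == 0 and recap["unreachable"] == 0  # play succeeded
```

This matches all four recaps in the log above (every one reports `failed: 0` and `unreachable: 0`, hence the `POST-RUN END RESULT_NORMAL` outcomes).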